
BreachForums Returns Just Weeks After FBI Seizure - Honeypot or Blunder?

The online criminal bazaar BreachForums has been resurrected merely two weeks after a U.S.-led coordinated law enforcement action dismantled and seized control of its infrastructure. Cybersecurity researchers and dark web trackers Brett Callow, Dark Web Informer, and FalconFeeds spotted the site's return online at breachforums[.]st – one of the dismantled sites – under a user named ShinyHunters,

Hackers Exploiting WP-Automatic Plugin Bug to Create Admin Accounts on WordPress Sites

Threat actors are actively attempting to exploit a critical security flaw in the ValvePress Automatic plugin for WordPress that could allow site takeovers. The shortcoming, tracked as CVE-2024-27956, carries a CVSS score of 9.9 out of a maximum of 10. It impacts all versions of the plugin prior to 3.92.0. The issue has been resolved in version 3.92.1, released on February 27, 2024,

Atlassian Releases Fixes for Over 2 Dozen Flaws, Including Critical Bamboo Bug

Atlassian has released patches for more than two dozen security flaws, including a critical bug impacting Bamboo Data Center and Server that could be exploited without requiring user interaction. Tracked as CVE-2024-1597, the vulnerability carries a CVSS score of 10.0, indicating maximum severity. Described as an SQL injection flaw, it's rooted in a dependency called org.postgresql:

Hackers Exploit Job Boards, Stealing Millions of Resumes and Personal Data

Employment agencies and retail companies chiefly located in the Asia-Pacific (APAC) region have been targeted by a previously undocumented threat actor known as ResumeLooters since early 2023 with the goal of stealing sensitive data. Singapore-headquartered Group-IB said the hacking crew's activities are geared towards job search platforms and the theft of resumes, with as many as 65

Turkish Hackers Exploiting Poorly Secured MS SQL Servers Across the Globe

Poorly secured Microsoft SQL (MS SQL) servers are being targeted in the U.S., European Union, and Latin American (LATAM) regions as part of an ongoing financially motivated campaign to gain initial access. “The analyzed threat campaign appears to end in one of two ways, either the selling of ‘access’ to the compromised host, or the ultimate delivery of ransomware payloads,” Securonix researchers

Alert: New Vulnerabilities Discovered in QNAP and Kyocera Device Manager

A security flaw has been disclosed in Kyocera's Device Manager product that could be exploited by bad actors to carry out malicious activities on affected systems. "This vulnerability allows attackers to coerce authentication attempts to their own resources, such as a malicious SMB share, to capture or relay Active Directory hashed credentials if the ‘Restrict NTLM: Outgoing NTLM

KnowsMore - A Swiss Army Knife Tool For Pentesting Microsoft Active Directory (NTLM Hashes, BloodHound, NTDS And DCSync)

By: Zion3R


KnowsMore officially supports Python 3.8+.

Main features

  • Import NTLM Hashes from .ntds output txt file (generated by CrackMapExec or secretsdump.py)
  • Import NTLM Hashes from NTDS.dit and SYSTEM
  • Import Cracked NTLM hashes from hashcat output file
  • Import BloodHound ZIP or JSON file
  • BloodHound importer (import JSON to Neo4J without BloodHound UI)
  • Analyse password quality (length, lowercase, uppercase, digits, special and Latin characters) – see the sketch after this list
  • Analyse password similarity with the company name and user names
  • Search for users, passwords and hashes
  • Export all cracked credentials directly to the BloodHound Neo4j database as 'owned objects'
  • Other amazing features...
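
To make the similarity scoring concrete, here is a minimal sketch of how password-to-company-name similarity can be computed with Python's standard difflib. The 0-100 scale mirrors the company_similarity column in the stats output below; the exact metric KnowsMore uses may differ:

import difflib

def company_similarity(password: str, company: str) -> int:
    # Ratio of matching characters scaled to 0-100 (illustrative metric only;
    # KnowsMore's actual scoring algorithm may differ)
    ratio = difflib.SequenceMatcher(None, password.lower(), company.lower()).ratio()
    return round(ratio * 100)

print(company_similarity('company123', 'company'))  # ~82
print(company_similarity('company', 'company'))     # 100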

Getting stats

knowsmore --stats

This command produces several statistics about the passwords, like the output below.

KnowsMore v0.1.4 by Helvio Junior
Active Directory, BloodHound, NTDS hashes and Password Cracks correlation tool
https://github.com/helviojunior/knowsmore

[+] Startup parameters
command line: knowsmore --stats
module: stats
database file: knowsmore.db

[+] start time 2023-01-11 03:59:20
[?] General Statistics
+-------+----------------+-------+
| top | description | qty |
|-------+----------------+-------|
| 1 | Total Users | 95369 |
| 2 | Unique Hashes | 74299 |
| 3 | Cracked Hashes | 23177 |
| 4 | Cracked Users | 35078 |
+-------+----------------+-------+

[?] General Top 10 passwords
+-------+-------------+-------+
| top | password | qty |
|-------+-------------+-------|
| 1 | password | 1111 |
| 2 | 123456 | 824 |
| 3 | 123456789 | 815 |
| 4 | guest | 553 |
| 5 | qwerty | 329 |
| 6 | 12345678 | 277 |
| 7 | 111111 | 268 |
| 8 | 12345 | 202 |
| 9 | secret | 170 |
| 10 | sec4us | 165 |
+-------+-------------+-------+

[?] Top 10 weak passwords by company name similarity
+-------+--------------+---------+----------------------+-------+
| top | password | score | company_similarity | qty |
|-------+--------------+---------+----------------------+-------|
| 1 | company123 | 7024 | 80 | 1111 |
| 2 | Company123 | 5209 | 80 | 824 |
| 3 | company | 3674 | 100 | 553 |
| 4 | Company@10 | 2080 | 80 | 329 |
| 5 | company10 | 1722 | 86 | 268 |
| 6 | Company@2022 | 1242 | 71 | 202 |
| 7 | Company@2024 | 1015 | 71 | 165 |
| 8 | Company2022 | 978 | 75 | 157 |
| 9 | Company10 | 745 | 86 | 116 |
| 10 | Company21 | 707 | 86 | 110 |
+-------+--------------+---------+----------------------+-------+

Installation

Simple

pip3 install --upgrade knowsmore

Note: If you face problems with dependency versions, check the Virtual ENV file.

Execution Flow

There is no mandatory order for importing data, but to get better correlation we suggest the following execution flow (a wrapper sketch follows the list):

  1. Create database file
  2. Import BloodHound files
    1. Domains
    2. GPOs
    3. OUs
    4. Groups
    5. Computers
    6. Users
  3. Import NTDS file
  4. Import cracked hashes
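
If you want to automate that flow, a minimal Python wrapper that shells out to the commands documented below could look like this (all file paths are placeholders to adjust for your engagement):

import subprocess

steps = [
    ['knowsmore', '--create-db'],
    ['knowsmore', '--bloodhound', '--import-data', 'client.zip'],
    ['knowsmore', '--secrets-dump', '-target', 'LOCAL',
     '-ntds', 'ntds.dit', '-system', 'SYSTEM'],
    ['knowsmore', '--ntlm-hash', '--company', 'clientCompanyName',
     '--import-cracked', 'cracked.txt'],
]
for cmd in steps:
    subprocess.run(cmd, check=True)  # abort on the first failing step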

Create database file

All data are stored in a SQLite database.

knowsmore --create-db

Importing BloodHound files

We can import full BloodHound files into KnowsMore, correlate the data, and sync it to the Neo4j BloodHound database. This means you can use KnowsMore alone to import JSON files directly into the Neo4j database instead of using the extremely slow BloodHound user interface.

# Bloodhound ZIP File
knowsmore --bloodhound --import-data ~/Desktop/client.zip

# Bloodhound JSON File
knowsmore --bloodhound --import-data ~/Desktop/20220912105336_users.json

Note: KnowsMore can import both BloodHound ZIP and JSON files, but we recommend using the ZIP file, because KnowsMore will automatically order the files for better data correlation.

Sync data to Neo4j BloodHound database

# Bloodhound ZIP File
knowsmore --bloodhound --sync 10.10.10.10:7687 -d neo4j -u neo4j -p 12345678

Note: The KnowsMore implementation of bloodhound-importer was inspired by the Fox-IT BloodHound Import implementation. We implemented several changes to save all data in the KnowsMore SQLite database and then perform an incremental sync to the Neo4j database. This strategy has several benefits, such as being at least 10x faster than the original BloodHound user interface.

Importing NTDS file

Option 1

Note: Import hashes and clear-text passwords directly from NTDS.dit and SYSTEM registry

knowsmore --secrets-dump -target LOCAL -ntds ~/Desktop/ntds.dit -system ~/Desktop/SYSTEM

Option 2

Note: First use secretsdump to extract the NTDS hashes with the command below

secretsdump.py -ntds ntds.dit -system system.reg -hashes lmhash:ntlmhash LOCAL -outputfile ~/Desktop/client_name

After that, import the generated file:

knowsmore --ntlm-hash --import-ntds ~/Desktop/client_name.ntds

Generating a custom wordlist

knowsmore --word-list -o "~/Desktop/Wordlist/my_custom_wordlist.txt" --batch --name company_name
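
To get a feel for what ends up in such a wordlist, here is a minimal Python sketch of company-name mangling. The case variants and suffixes are illustrative guesses; KnowsMore's actual generation rules may differ:

import itertools

def mangle(name: str):
    # Case variants plus common suffixes (illustrative rules only)
    bases = {name.lower(), name.capitalize(), name.upper()}
    suffixes = ['', '123', '10', '2022', '@2023', '@2024']
    for base, suffix in itertools.product(bases, suffixes):
        yield base + suffix

with open('my_custom_wordlist.txt', 'w') as wordlist:
    wordlist.write('\n'.join(mangle('company_name')) + '\n')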

Importing cracked hashes

Cracking hashes

First extract all hashes to a txt file

# Extract NTLM hashes to file
knowsmore --ntlm-hash --export-hashes "~/Desktop/ntlm_hash.txt"

# Or, extract NTLM hashes from NTDS file
cat ~/Desktop/client_name.ntds | cut -d ':' -f4 > ntlm_hashes.txt
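
The cut one-liner works because each line of a secretsdump .ntds file has the format domain\user:rid:lmhash:nthash:::, so the NT hash is the fourth colon-separated field. The same extraction as a short Python sketch:

with open('client_name.ntds') as src, open('ntlm_hashes.txt', 'w') as dst:
    for line in src:
        fields = line.strip().split(':')
        if len(fields) >= 4:
            dst.write(fields[3] + '\n')  # fields[3] is the NT hash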

To crack the hashes, I usually use hashcat with the commands below

# Wordlist attack
hashcat -m 1000 -a 0 -O -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" "~/Desktop/Wordlist/*"

# Mask attack
hashcat -m 1000 -a 3 -O --increment --increment-min 4 -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" ?a?a?a?a?a?a?a?a

Importing the hashcat output file

knowsmore --ntlm-hash --company clientCompanyName --import-cracked ~/Desktop/cracked.txt

Note: Change clientCompanyName to the name of your company

Wipe sensitive data

As passwords and their hashes are extremely sensitive data, there is a module to wipe the clear-text passwords and their respective hashes.

Note: This command will keep all generated statistics and imported user data.

knowsmore --wipe
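
Conceptually, the wipe overwrites the sensitive columns in the SQLite database while leaving statistics and imported user data intact. A rough sketch of the idea (the table and column names here are hypothetical, not KnowsMore's actual schema):

import sqlite3

con = sqlite3.connect('knowsmore.db')
# Hypothetical schema: KnowsMore's real table/column names may differ
con.execute("UPDATE credentials SET password = '', ntlm_hash = ''")
con.commit()
con.close()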

BloodHound Mark as owned

One User

During an assessment you can recover user passwords in several ways; you can add them to the KnowsMore database:

knowsmore --user-pass --username administrator --password Sec4US@2023

# or adding the company name

knowsmore --user-pass --username administrator --password Sec4US@2023 --company sec4us

Integrate all cracked credentials into the Neo4j BloodHound database

knowsmore --bloodhound --mark-owned 10.10.10.10 -d neo4j -u neo4j -p 123456

For remote connections, make sure the Neo4j database server accepts remote connections. Change the line below in the config file /etc/neo4j/neo4j.conf and restart the service.

server.bolt.listen_address=0.0.0.0:7687
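
Under the hood, marking a principal as owned amounts to setting the owned property that the BloodHound UI reads on the matching node. A minimal sketch using the official neo4j Python driver (the node label, property and principal name are illustrative; the queries KnowsMore actually issues may differ):

from neo4j import GraphDatabase

driver = GraphDatabase.driver('bolt://10.10.10.10:7687', auth=('neo4j', '12345678'))
with driver.session() as session:
    # 'owned' is the node property BloodHound uses to flag owned principals
    session.run(
        'MATCH (u:User {name: $name}) SET u.owned = true',
        name='ADMINISTRATOR@DOMAIN.LOCAL',
    )
driver.close()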


MongoDB Suffers Security Breach, Exposing Customer Data

MongoDB on Saturday disclosed it's actively investigating a security incident that has led to unauthorized access to "certain" corporate systems, resulting in the exposure of customer account metadata and contact information. The American database software company said it first detected anomalous activity on December 13, 2023, and that it immediately activated its incident response

Mirai-based Botnet Exploiting Zero-Day Bugs in Routers and NVRs for Massive DDoS Attacks

An active malware campaign is leveraging two zero-day vulnerabilities with remote code execution (RCE) functionality to rope routers and video recorders into a Mirai-based distributed denial-of-service (DDoS) botnet. “The payload targets routers and network video recorder (NVR) devices with default admin credentials and installs Mirai variants when successful,” Akamai said in an advisory

Critical Flaws Discovered in Veeam ONE IT Monitoring Software – Patch Now

Veeam has released security updates to address four flaws in its ONE IT monitoring and analytics platform, two of which are rated critical in severity. The list of vulnerabilities is as follows - CVE-2023-38547 (CVSS score: 9.9) - An unspecified flaw that can be leveraged by an unauthenticated user to gain information about the SQL server connection Veeam ONE uses to access its configuration

DorXNG - Next Generation DorX. Built By Dorks, For Dorks

By: Zion3R


DorXNG is a modern solution for harvesting OSINT data using advanced search engine operators through multiple upstream search providers. On the backend it leverages a purpose-built containerized image of SearXNG, a self-hosted, hackable, privacy-focused meta-search engine.

Our SearXNG implementation routes all search queries over the Tor network while refreshing circuits every ten seconds with Tor's MaxCircuitDirtiness configuration directive. We have also disabled all of SearXNG's client-side timeout features. These settings allow for evasion of search engine restrictions commonly encountered while issuing many repeated search queries.

The DorXNG client application is written in Python3, and interacts with the SearXNG API to issue search queries concurrently. It can even issue requests across multiple SearXNG instances. The resulting search results are stored in a SQLite3 database.
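
As a rough illustration of that flow, the sketch below issues queries to a SearXNG instance's JSON API concurrently and stores the result URLs in SQLite. It assumes the instance allows format=json and uses the server address from the examples below; DorXNG's real implementation differs in its details:

import concurrent.futures
import sqlite3

import requests

SERVER = 'https://172.17.0.2/search'

def search(query):
    # SearXNG returns JSON when the instance allows format=json;
    # verify=False assumes a self-signed certificate on the container
    r = requests.get(SERVER, params={'q': query, 'format': 'json'},
                     timeout=60, verify=False)
    r.raise_for_status()
    return r.json().get('results', [])

con = sqlite3.connect('dorxng.db')
con.execute('CREATE TABLE IF NOT EXISTS results (url TEXT UNIQUE, title TEXT)')

queries = ['site:example.com intext:password', 'inurl:admin intitle:login']
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    for results in pool.map(search, queries):
        for item in results:
            con.execute('INSERT OR IGNORE INTO results VALUES (?, ?)',
                        (item.get('url'), item.get('title')))
con.commit()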


We have enabled every supported upstream search engine that allows advanced search operator queries:

  • Google
  • DuckDuckGo
  • Qwant
  • Bing
  • Brave
  • Startpage
  • Yahoo

For more information about which search engines SearXNG supports, see: Configured Engines

Setup

LINUX ONLY ** Sorry Normies **

Install DorXNG

git clone https://github.com/researchanddestroy/dorxng
cd dorxng
pip install -r requirements.txt
./DorXNG.py -h

Download and Run Our Custom SearXNG Docker Container (at least one). Multiple SearXNG instances can be used. Use the --serverlist option with DorXNG. See: server.lst

When starting multiple containers, wait at least a few seconds between starting each one.

docker run researchanddestroy/searxng:latest

If you would like to build the container yourself:

git clone https://github.com/researchanddestroy/searxng # The URL must be all lowercase for the build process to complete
cd searxng
DOCKER_BUILDKIT=1 make docker.build
docker images
docker run <image-id>

By default, DorXNG has a hard-coded server variable in parse_args.py, set to 172.17.0.2, the IP address Docker assigns to the first container you run on your machine. This can be changed, or overridden with --server or --serverlist.

Start Issuing Search Queries

./DorXNG.py -q 'search query'

Query the DorXNG Database

./DorXNG.py -D 'regex search string'

Instructions

-h, --help            show this help message and exit
-s SERVER, --server SERVER
DorXNG Server Instance - Example: 'https://172.17.0.2/search'
-S SERVERLIST, --serverlist SERVERLIST
Issue Search Queries Across a List of Servers - Format: Newline Delimited
-q QUERY, --query QUERY
Issue a Search Query - Examples: 'search query' | '!tch search query' | 'site:example.com intext:example'
-Q QUERYLIST, --querylist QUERYLIST
Iterate Through a Search Query List - Format: Newline Delimited
-n NUMBER, --number NUMBER
Define the Number of Page Result Iterations
-c CONCURRENT, --concurrent CONCURRENT
Define the Number of Concurrent Page Requests
-l LIMITDATABASE, --limitdatabase LIMITDATABASE
Set Maximum Database Size Limit - Starts New Database After Exceeded - Example: --limitdatabase 10 (10k Database Entries) - Suggested Maximum Database Size is 50k when doing Deep Recursion
-L LOOP, --loop LOOP Define the Number of Main Function Loop Iterations - Infinite Loop with 0
-d DATABASE, --database DATABASE
Specify SQL Database File - Default: 'dorxng.db'
-D DATABASEQUERY, --databasequery DATABASEQUERY
Issue Database Query - Format: Regex
-m MERGEDATABASE, --mergedatabase MERGEDATABASE
Merge SQL Database File - Example: --mergedatabase database.db
-t TIMEOUT, --timeout TIMEOUT
Specify Timeout Interval Between Requests - Default: 4 Seconds - Disable with 0
-r NONEWRESULTS, --nonewresults NONEWRESULTS
Specify Number of Iterations with No New Results - Default: 4 (3 Attempts) - Disable with 0
-v, --verbose Enable Verbose Output
-vv, --veryverbose Enable Very Verbose Output - Displays Raw JSON Output

Tips

Sometimes you will hit a Tor exit node that is already shunted by upstream search providers, causing you to receive a minimal amount of search results. Not to worry... Just keep firing off queries.

Keep your DorXNG SQL database file and rerun your command, or use the --loop switch to iterate the main function repeatedly.

Most often, the more passes you make over a search query, the more results you'll find.

Also keep in mind that we have made a sacrifice in speed for a higher degree of data output. This is an OSINT project after all.

Each search query you make is issued to 7 upstream search providers... Especially with --concurrent queries, this generates a lot of upstream requests... So have patience.

Keep in mind that DorXNG will continue to append new search results to your database file. Use the --database switch to specify a database filename; the default filename is dorxng.db. This probably doesn't matter for most, but if you want to keep your OSINT investigations separate, it's there for you.

Four concurrent search requests seems to be the sweet spot. You can issue more, but the more queries you issue at a time the longer it takes to receive results. It also increases the likelihood you receive HTTP/429 Too Many Requests responses from upstream search providers on that specific Tor circuit.

If you start multiple SearXNG Docker containers too rapidly, Tor connections may fail to establish. While a container is initializing, watch its output for a valid response from the Tor Connectivity Check function.

If you see anything other than that, or if you start to see HTTP/500 response codes coming back from the SearXNG monitor script (STDOUT in the container), kill the Docker container and spin up a new one.

HTTP/504 Gateway Time-out response codes within DorXNG are expected sometimes. This means the SearXNG instance did not receive a valid response back within one minute. That specific Tor circuit is probably too slow. Just keep going!

There really isn't a reason to run a ton of these containers... Yet... How many you run really depends on what you're doing. Each container uses approximately 1.25GB of RAM.

Running one container works perfectly fine, except you will likely miss search results. So use --loop and do not disable --timeout.

Running multiple containers is nice because each has its own Tor circuit that's refreshing every 10 seconds.

When running in --serverlist mode, disable the --timeout feature so there is no delay between requests (the default delay interval is 4 seconds).

Keep in mind that the more containers you run, the more memory you will need. This goes for deep recursion too... We have disabled Python's maximum recursion limit...

The more recursions your command goes through without returning to main, the more memory the process will consume. You may come back to find that the process has crashed with a Killed error message. If this happens, your machine ran out of memory and killed the process. Not to worry though... Your database file is still good.

If your database file gets exceptionally large it inevitably slows down the program and consumes more memory with each iteration...

Those Python Stack Frames are Thicc...

We've seen a marked drop in performance with database files that exceed approximately 50 thousand entries.

The --limitdatabase option has been implemented to mitigate some of these memory consumption issues. Use it in combination with --loop to break deep recursive iteration inside iterator.py and restart from main right where you left off.

Once you have a series of database files you can merge them all (one at a time) with --mergedatabase. You can even merge them all into a new database file if you specify an unused filename with --database.
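
Mechanically, a merge like this can be done in SQLite by attaching the source database and copying rows across. A minimal sketch (the results table name is hypothetical; DorXNG's actual schema may differ):

import sqlite3

con = sqlite3.connect('new-database.db')
con.execute("ATTACH DATABASE 'dorxng.db' AS src")
# Create the destination table from the source schema if missing,
# then copy the rows across ('results' is a hypothetical table name)
con.execute('CREATE TABLE IF NOT EXISTS results AS SELECT * FROM src.results WHERE 0')
con.execute('INSERT OR IGNORE INTO results SELECT * FROM src.results')
con.commit()
con.close()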

DO NOT merge data into a database that is currently being used by a running DorXNG process. This may cause errors and could potentially corrupt the database.

The included query.lst file contains every dork that currently exists in the Google Hacking Database (GHDB). See: ghdb_scraper.py

We've already run through it for you... Our ghdb.db file contains over one million entries and counting! You can download it here: ghdb.db, if you'd like a copy.

Example of querying the ghdb.db database:

./DorXNG.py -d ghdb.db -D '^http.*\.sql$'

A rewrite of DorXNG in Golang is already in the works. (GorXNG? | DorXNGNG?)

We're gonna need more dorks... Check out DorkGPT

Examples

Single Search Query

./DorXNG.py -q 'search query'

Concurrent Search Queries

./DorXNG.py -q 'search query' -c4

Page Iteration Mode

./DorXNG.py -q 'search query' -n4

Iterative Concurrent Search Queries

./DorXNG.py -q 'search query' -c4 -n64

Server List Iteration Mode

./DorXNG.py -S server.lst -q 'search query' -c4 -n64 -t0

Query List Iteration Mode

./DorXNG.py -Q query.lst -c4 -n64

Query and Server List Iteration

./DorXNG.py -S server.lst -Q query.lst -c4 -n64 -t0

Main Function Loop Iteration Mode

./DorXNG.py -S server.lst -Q query.lst -c4 -n64 -t0 -L4

Infinite Main Function Loop Iteration Mode with a Database File Size Limit Set to 10k Entries

./DorXNG.py -S server.lst -Q query.lst -c4 -n64 -t0 -L0 -l10

Merging a Database (One at a Time) into a New Database File

./DorXNG.py -d new-database.db -m dorxng.db

Merge All Database Files in the Current Working Directory into a New Database File

for i in *.db; do ./DorXNG.py -d new-database.db -m "$i"; done

Query a Database

./DorXNG.py -d new-database.db -D 'regex search string'


ICMPWatch - ICMP Packet Sniffer

By: Zion3R


ICMP Packet Sniffer is a Python program that allows you to capture and analyze ICMP (Internet Control Message Protocol) packets on a network interface. It provides detailed information about the captured packets, including source and destination IP addresses, MAC addresses, ICMP type, payload data, and more. The program can also store the captured packets in a SQLite database and save them in a pcap format.
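
The core of such a sniffer is small. Here is a minimal sketch of the same idea using scapy (the interface name and printed fields are illustrative; ICMPWatch itself offers many more options):

from scapy.all import ICMP, IP, sniff

def handle(pkt):
    # Print source/destination IPs, ICMP type and the first payload bytes
    if pkt.haslayer(IP) and pkt.haslayer(ICMP):
        print(f"{pkt[IP].src} -> {pkt[IP].dst} "
              f"type={pkt[ICMP].type} payload={bytes(pkt[ICMP].payload)[:16]!r}")

# Requires root privileges; 'eth0' is an example interface
sniff(iface='eth0', filter='icmp', prn=handle, store=False)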


Features

  • Capture and analyze ICMP Echo Request and Echo Reply packets.
  • Display detailed information about each ICMP packet, including source and destination IP addresses, MAC addresses, packet size, ICMP type, and payload content.
  • Save captured packet information to a text file.
  • Store captured packet information in an SQLite database.
  • Save captured packets to a PCAP file for further analysis.
  • Support for custom packet filtering based on source and destination IP addresses.
  • Colorful console output using ANSI escape codes.
  • User-friendly command-line interface.

Requirements

  • Python 3.7+
  • scapy 2.4.5 or higher
  • colorama 0.4.4 or higher

Installation

  1. Clone this repository:
git clone https://github.com/HalilDeniz/ICMPWatch.git
  2. Install the required dependencies:
pip install -r requirements.txt

Usage

python ICMPWatch.py [-h] [-v] [-t TIMEOUT] [-f FILTER] [-o OUTPUT] [--type {0,8}] [--src-ip SRC_IP] [--dst-ip DST_IP] -i INTERFACE [-db] [-c CAPTURE]
  • -v or --verbose: Show verbose packet details.
  • -t or --timeout: Sniffing timeout in seconds (default is 300 seconds).
  • -f or --filter: BPF filter for packet sniffing (default is "icmp").
  • -o or --output: Output file to save captured packets.
  • --type: ICMP packet type to filter (0: Echo Reply, 8: Echo Request).
  • --src-ip: Source IP address to filter.
  • --dst-ip: Destination IP address to filter.
  • -i or --interface: Network interface to capture packets (required).
  • -db or --database: Store captured packets in an SQLite database.
  • -c or --capture: Capture file to save packets in pcap format.

Press Ctrl+C to stop the sniffing process.

Examples

  • Capture ICMP packets on the "eth0" interface:
python icmpwatch.py -i eth0
  • Sniff ICMP traffic on interface "eth0" and save the results to a file:
python icmpwatch.py -i eth0 -o icmp_results.txt
  • Filtering by Source and Destination IP:
python icmpwatch.py -i eth0 --src-ip 192.168.1.10 --dst-ip 192.168.1.20
  • Filtering ICMP Echo Requests:
python icmpwatch.py -i eth0 --type 8
  • Saving Captured Packets:
python icmpwatch.py -i eth0 -c captured_packets.pcap


New SkidMap Linux Malware Variant Targeting Vulnerable Redis Servers

By: THN
Vulnerable Redis services have been targeted by a "new, improved, dangerous" variant of a malware called SkidMap that's engineered to target a wide range of Linux distributions. "The malicious nature of this malware is to adapt to the system on which it is executed," Trustwave security researcher Radoslaw Zdonczyk said in an analysis published last week. Some of the Linux distribution SkidMap

Grepmarx - A Source Code Static Analysis Platform For AppSec Enthusiasts


Grepmarx is a web application providing a single platform to quickly understand, analyze and identify vulnerabilities in possibly large and unknown code bases.

Features

SAST (Static Analysis Security Testing) capabilities:

  • Multiple languages support: C/C++, C#, Go, HTML, Java, Kotlin, JavaScript, TypeScript, OCaml, PHP, Python, Ruby, Bash, Rust, Scala, Solidity, Terraform, Swift
  • Multiple frameworks support: Spring, Laravel, Symfony, Django, Flask, Node.js, jQuery, Express, Angular...
  • 1600+ existing analysis rules
  • Easily extend analysis rules using Semgrep syntax: https://semgrep.dev/editor
  • Manage rules in rule packs to tailor code scanning

SCA (Software Composition Analysis) capabilities:

  • Multiple package-dependency formats support: NPM, Maven, Gradle, Composer, pip, Gopkg, Gem, Cargo, NuPkg, CSProj, PubSpec, Cabal, Mix, Conan, Clojure, Docker, GitHub Actions, Jenkins HPI, Kubernetes
  • SBOM (Software Bill-of-Materials) generation (CycloneDX compliant)

Extra

  • Analysis workbench designed to efficiently browse scan results
  • Scan code that doesn't compile
  • Comprehensive LOC (Lines of Code) counter
  • Inspector: automatic application features discovery
  • ... and a Dark Mode

Screenshots

Scan customization, analysis workbench, and rule pack edition.

Execution

Grepmarx is provided with a configuration to be executed in Docker and Gunicorn.

Docker execution


Make sure you have docker-compose installed on the system and that the Docker daemon is running. The application can then easily be executed in a Docker container. The steps:

Get the code

$ git clone https://github.com/Orange-Cyberdefense/grepmarx.git
$ cd grepmarx

Start the app in Docker

$ sudo docker-compose pull && sudo docker-compose build && sudo docker-compose up -d

Visit http://localhost:5000 in your browser. The app should be up & running.

Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.

Gunicorn


Gunicorn 'Green Unicorn' is a Python WSGI HTTP server for UNIX. A supervisor configuration file is provided to start it along with the required Celery worker (used for queuing security scans).

Install using pip

$ pip install gunicorn supervisor

Start the app using gunicorn binary

$ supervisord -c supervisord.conf

Visit http://localhost:8001 in your browser. The app should be up & running.

Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.

Build from sources

Get the code

$ git clone https://github.com/Orange-Cyberdefense/grepmarx.git
$ cd grepmarx

Install virtualenv modules

$ virtualenv env
$ source env/bin/activate

Install Python modules

$ # SQLite Database (Development)
$ pip3 install -r requirements.txt
$ # OR with PostgreSQL connector (Production)
$ # pip install -r requirements-pgsql.txt

Install additional requirements

# Dependency scan (cdxgen / depscan) requirements
$ sudo apt install npm openjdk-17-jdk maven gradle golang composer
$ sudo npm install -g @cyclonedx/cdxgen
$ pip install appthreat-depscan

A Redis server is required to queue security scans. Install the redis package with your favorite distro package manager, then:

$ redis-server

Set the FLASK_APP environment variable

$ export FLASK_APP=run.py
$ # Set up the DEBUG environment
$ # export FLASK_ENV=development

Start the celery worker process

$ celery -A app.celery_worker.celery worker --pool=prefork --loglevel=info --detach

Start the application (development mode)

$ # --host=0.0.0.0 - expose the app on all network interfaces (default 127.0.0.1)
$ # --port=5000 - specify the app port (default 5000)
$ flask run --host=0.0.0.0 --port=5000

Access grepmarx in browser: http://127.0.0.1:5000/

Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.

Credits & Links



Grepmarx - Provided by Orange Cyberdefense.



Top 5 Critical CVE Vulnerabilities from 2019 That Every CISO Must Patch Before Getting Fired!

The number of vulnerabilities continues to increase so much that the technical teams in charge of patch management find themselves drowning in a myriad of critical and urgent tasks. Therefore we have taken the time to review the profile of the most critical vulnerabilities and issues that impacted 2019. After this frenzy during […]

Enumdb Beta – Brute Force MySQL and MSSQL Databases

Enumdb is a brute-force and post-exploitation tool for MySQL and MSSQL databases. When provided a list of usernames and/or passwords, it will cycle through each looking for valid credentials. By...
