C2-Tracker - Live Feed Of C2 Servers, Tools, And Botnets

By: Zion3R


Free-to-use IOC feed for various tools/malware. It started out tracking just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool and there is an all.txt.

The feed should update daily; I am actively working on making the backend more reliable.
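
For a sense of how the collection works, here is a minimal sketch using the shodan Python library (the query, file names, and directory layout are illustrative assumptions, not the project's actual tracker.py):

# illustrative sketch only; not the project's actual tracker.py
import os
import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

# hypothetical query; the real per-tool queries live in the repo
queries = {"cobaltstrike": 'product:"Cobalt Strike Beacon"'}

os.makedirs("data", exist_ok=True)
all_ips = set()
for tool, query in queries.items():
    matches = api.search(query)["matches"]  # first page of Shodan results
    ips = {m["ip_str"] for m in matches}
    all_ips |= ips
    with open(f"data/{tool}.txt", "w") as f:
        f.write("\n".join(sorted(ips)))

with open("data/all.txt", "w") as f:
    f.write("\n".join(sorted(all_ips)))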


Honorable Mentions

Many of the Shodan queries have been sourced from other CTI researchers; huge shoutout to them!

Thanks to BertJanCyber for creating the KQL query for ingesting this feed

And finally, thanks to Y_nexro for creating C2Live in order to visualize the data

What do I track?

Running Locally

If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY

# persist the key (exported so child processes can see it) and reload the shell
echo 'export SHODAN_API_KEY=API_KEY' >> ~/.bashrc
bash
python3 -m pip install -r requirements.txt
python3 tracker.py

Contributing

I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted; just know that fidelity is paramount (a high ratio of true positives to false positives is the focus).


CureIAM - Clean Accounts Over Permissions In GCP Infra At Scale

By: Zion3R

Automated clean-up of over-permissioned IAM accounts on GCP infrastructure.

CureIAM is an easy-to-use, reliable, and performant engine for Least Privilege Principle enforcement on GCP cloud infra. It enables DevOps and Security teams to quickly clean up accounts in GCP infra that have been granted more permissions than they require. CureIAM fetches recommendations and insights from the GCP IAM recommender, scores them, and enforces those recommendations automatically on a daily basis. It takes care of scheduling and all other aspects of running these enforcement jobs at scale. It is built on top of the GCP IAM recommender APIs and the Cloudmarker framework.


Key features

Discover what makes CureIAM scalable and production grade.

  • Config driven: The entire workflow of CureIAM is config driven. Skip to the Config section to learn more about it.
  • Scalable: It is designed to scale thanks to its plugin-driven, multi-process, and multi-threaded approach.
  • Handles scheduling: Scheduling is embedded in CureIAM itself; configure the time, and CureIAM will run daily at that time.
  • Plugin driven: The CureIAM codebase is completely plugin oriented, which means one can plug and play the existing plugins or create new ones to add more functionality.
  • Tracks actionable insights: Every action that CureIAM takes is recorded for audit purposes. It can do that in a file store and in an Elasticsearch store; if you want, you can build other store plugins to push that data elsewhere for tracking.
  • Scoring and enforcement: Every recommendation fetched by CureIAM is scored against various parameters, producing scores such as safe_to_apply_score, risk_score, and over_privilege_score. Each score serves a different purpose; for example, safe_to_apply_score determines whether a recommendation can be applied automatically, based on the threshold set in the CureIAM.yaml config file (see the sketch after this list).
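
For illustration, the threshold gate could reduce to something like this sketch (the function and field names are assumptions, not CureIAM's actual code):

# conceptual sketch of score-gated enforcement; not CureIAM's actual code
def should_auto_apply(recommendation: dict, min_safe_to_apply_score: int) -> bool:
    # safe_to_apply_score is assumed to be computed upstream by the scorer
    return recommendation["safe_to_apply_score"] >= min_safe_to_apply_score

A recommendation scoring below the configured minimum would be reported but left for manual review rather than auto-applied.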

Usage

Since CureIAM is built with Python, you can run it locally with the commands below. Before running, make sure a configuration file is present at one of /etc/CureIAM.yaml, ~/.CureIAM.yaml, ~/CureIAM.yaml, or CureIAM.yaml, and that a service account JSON file is present in the current directory, preferably named cureiamSA.json. The SA private key can be named anything, but for the Docker image build it is preferred to use this name. Make sure to reference this file in the config for GCP cloud.

# Install necessary dependencies
$ pip install -r requirements.txt

# Run CureIAM now
$ python -m CureIAM -n

# Run CureIAM process as scheduler
$ python -m CureIAM

# Check CureIAM help
$ python -m CureIAM --help

CureIAM can also be run inside a Docker environment; this is completely optional and can be used for CI/CD with a K8s cluster deployment.

# Build docker image from dockerfile
$ docker build -t cureiam .

# Run the image as scheduler
$ docker run -d cureiam

# Run the image now
$ docker run -it cureiam -m CureIAM -n

Config

The CureIAM.yaml configuration file is the heart of the CureIAM engine. Everything the engine does is based on the pipeline configured in this config file. Let's break this down into different sections to make the config look simpler.

  1. Let's configure the first section, which covers logging and scheduler configuration.

  logger:
    version: 1

    disable_existing_loggers: false

    formatters:
      verysimple:
        format: >-
          [%(process)s]
          %(name)s:%(lineno)d - %(message)s
        datefmt: "%Y-%m-%d %H:%M:%S"

    handlers:
      rich_console:
        class: rich.logging.RichHandler
        formatter: verysimple

      file:
        class: logging.handlers.TimedRotatingFileHandler
        formatter: verysimple
        filename: /tmp/CureIAM.log
        when: midnight
        encoding: utf8
        backupCount: 5

    loggers:
      adal-python:
        level: INFO

    root:
      level: INFO
      handlers:
        - rich_console
        - file

  schedule: "16:00"

This subsection of the config uses the Rich logging module and schedules CureIAM to run daily at 16:00.
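
CureIAM pins the schedule library in its requirements, so the daily trigger presumably reduces to something like this sketch (the job function is an assumption):

# sketch of a daily trigger with the `schedule` library; job body assumed
import time
import schedule

def run_audits():
    pass  # fetch, score, and enforce recommendations

schedule.every().day.at("16:00").do(run_audits)

while True:
    schedule.run_pending()
    time.sleep(60)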

  2. The next section configures the different modules that might be used in the pipeline. This falls under the plugins section in CureIAM.yaml. You can think of this section as the declaration of the different plugins.
  plugins:
    gcpCloud:
      plugin: CureIAM.plugins.gcp.gcpcloud.GCPCloudIAMRecommendations
      params:
        key_file_path: cureiamSA.json

    filestore:
      plugin: CureIAM.plugins.files.filestore.FileStore

    gcpIamProcessor:
      plugin: CureIAM.plugins.gcp.gcpcloudiam.GCPIAMRecommendationProcessor
      params:
        mode_scan: true
        mode_enforce: true
        enforcer:
          key_file_path: cureiamSA.json
          allowlist_projects:
            - alpha
          blocklist_projects:
            - beta
          blocklist_accounts:
            - foo@bar.com
          allowlist_account_types:
            - user
            - group
            - serviceAccount
          blocklist_account_types:
            - None
          min_safe_to_apply_score_user: 0
          min_safe_to_apply_score_group: 0
          min_safe_to_apply_score_SA: 50

    esstore:
      plugin: CureIAM.plugins.elastic.esstore.EsStore
      params:
        # Change http to https if your Elasticsearch uses https
        scheme: http
        host: es-host.com
        port: 9200
        index: cureiam-stg
        username: security
        password: securepassword

Each of these plugin declarations has to be of this form:

  plugins:
    <plugin-name>:
      plugin: <class-name-as-python-path>
      params:
        param1: val1
        param2: val2

For example, the plugin CureIAM.plugins.elastic.esstore.EsStore points at the EsStore class. All the params defined in the YAML have to match the declaration in the __init__() function of that plugin class.
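
As an illustration, a store plugin whose params mirror its constructor might look like the following sketch (the signature is an assumption, not CureIAM's actual class):

# sketch only: a plugin whose __init__ mirrors the YAML `params` keys
class EsStore:
    def __init__(self, scheme="http", host="localhost", port=9200,
                 index="cureiam-stg", username=None, password=None):
        # each keyword argument corresponds to one key under `params`
        self.scheme = scheme
        self.host = host
        self.port = port
        self.index = index
        self.username = username
        self.password = password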

  3. Once the plugins are defined, the next step is to define the audit pipeline. It goes like this:
  audits:
    IAMAudit:
      clouds:
        - gcpCloud
      processors:
        - gcpIamProcessor
      stores:
        - filestore
        - esstore

Multiple audits can be created this way. The one created here is named IAMAudit, with four plugins in use: gcpCloud, gcpIamProcessor, filestore, and esstore. Note these are the same plugin names declared in step 2. Again, this only defines the pipeline; it does not run it. It will be considered for running by the definition in the next step.
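
Conceptually, an audit wires the plugins together roughly like this sketch (the method names follow the Cloudmarker read/eval/write convention; treat this as an illustration, not CureIAM's actual code):

# conceptual audit pipeline; a sketch, not CureIAM's actual code
def run_audit(cloud, processors, stores):
    for record in cloud.read():                    # cloud plugin yields IAM recommendations
        for processor in processors:
            for result in processor.eval(record):  # score and/or enforce
                for store in stores:
                    store.write(result)            # persist for auditing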

  4. Tell CureIAM to run the audits defined in the previous step.

  run:
    - IAMAudit

And that makes up the entire configuration for CureIAM; a full sample can be found in the repository. This config-driven pipeline concept is inherited from the Cloudmarker framework.

Dashboard

The JSON indexed into Elasticsearch by the Elasticsearch store plugin can be used to generate a dashboard in Kibana.

Contribute

[Please do!] We are looking for any kind of contribution to improve CureIAM's core functionality and documentation. When in doubt, make a PR!

Credits

Gojek Product Security Team


=============

NEW UPDATES May 2023 0.2.0

Refactoring

  • Broke the large code down into multiple small functions
  • Moved all plugins into the plugins folder: Esstore, files, Cloud, and GCP
  • Fixed zero-division issues
  • Migrated to the new major version of Elasticsearch
  • Changed the configuration in the CureIAM.yaml file
  • Tested with Python 3.9.x

Library Updates

Library versions are now pinned to avoid backward-compatibility issues.

  • elasticsearch==8.7.0 # previously 7.17.9
  • google-api-python-client==2.86.0
  • PyYAML==6.0
  • schedule==1.2.0
  • rich==13.3.5

Docker Files

  • Added Docker Compose for a local Elastic and Kibana stack in the elastic directory
  • Added .env-ex; rename .env-ex to .env before running Docker
Running docker compose: docker-compose -f docker_compose_es.yaml up

Features

  • Added the capability to run a scan without applying the recommendations. Note that if mode_scan is false, mode_enforce won't run either (see the sketch after this list).
      mode_scan: true
      mode_enforce: false
  • Turned off the email function temporarily.
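
In pseudocode terms, the two flags gate each other roughly like this (a sketch; the real flag handling may differ):

# sketch of how the two mode flags interact; not CureIAM's actual code
if config["mode_scan"]:
    recommendations = fetch_and_score()         # scan always runs first
    if config["mode_enforce"]:
        apply_recommendations(recommendations)  # enforcement requires a scan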


HardHatC2 - A C# Command And Control Framework

By: Zion3R


A cross-platform, collaborative, Command & Control framework written in C#, designed for red teaming and ease of use.

HardHat is a multiplayer C# .NET-based command and control framework designed to aid red team engagements and penetration testing. HardHat aims to improve quality of life during engagements by providing an easy-to-use but still robust C2 framework.
It contains three primary components: an ASP.NET teamserver, a Blazor .NET client, and C#-based implants.


Release Tracking

Alpha Release - 3/29/23. NOTE: HardHat is an Alpha release; it will have bugs, missing features, and unexpected things will happen. Thank you for trying it, and please report back any issues or missing features so they can be addressed.

Community

Discord: Join the community to talk about HardHat C2, programming, red teaming, and general cyber security things. The Discord community is also a great way to request help, submit new features, stay up to date on the latest additions, and submit bugs.

Features

Teamserver & Client

  • Per-operator accounts with account tiers to allow customized access control and features, including view-only guest modes, team-lead opsec approval(WIP), and admin accounts for general operation management.
  • Managers (Listeners)
  • Dynamic Payload Generation (Exe, Dll, shellcode, PowerShell command)
  • Creation & editing of C2 profiles on the fly in the client
  • Customization of payload generation
    • sleep time/jitter
    • kill date
    • working hours
    • type (Exe, Dll, Shellcode, ps command)
    • Included commands(WIP)
    • option to run confuser
  • File upload & Downloads
  • Graph View
  • File Browser GUI
  • Event Log
  • JSON logging for events & tasks
  • Loot tracking (Creds, downloads)
  • IOC tracing
  • Pivot proxies (SOCKS 4a, Port forwards)
  • Cred store
  • Autocomplete command history
  • Detailed help command
  • Interactive bash terminal command if the client is on Linux, or PowerShell on Windows; this allows automatic parsing and logging of terminal commands like proxychains
  • Persistent database storage of teamserver items (User accounts, Managers, Engineers, Events, tasks, creds, downloads, uploads, etc. )
  • Recon Entity Tracking (track info about users/devices, random metadata as needed)
  • Shared files for some commands (see teamserver page for details)
  • tab-based interact window for command issuing
  • table-based output option for some commands like ls, ps, etc.
  • Auto parsing of output from seatbelt to create "recon entities" and fill entries to reference back to later easily
  • Dark and light themes

Engineers

  • C# .NET Framework implant for Windows devices; currently only CLR/.NET 4 support
  • Currently only one implant, but looking to add others
  • It can be generated as an EXE, DLL, shellcode, or PowerShell stager
  • Rc4 encryption of payload memory & heap when sleeping (Exe / DLL only)
  • AES encryption of all network communication
  • ConfuserEx integration for obfuscation
  • HTTP, HTTPS, TCP, SMB communication
    • TCP & SMB can work P2P in bind or reverse setups
  • Unique per implant key generated at compile time
  • Multiple callback URIs depending on the C2 profile
  • P/Invoke & D/Invoke integration for windows API calls
  • SOCKS 4a support
  • Reverse Port Forward & Port Forwards
  • All commands run as async cancellable jobs
    • Option to run commands sync if desired
  • Inline assembly execution & inline shellcode execution
  • DLL Injection
  • Execute assembly & Mimikatz integration
  • Mimikatz is not built into the implant but is pushed when specific commands are issued
  • Various localhost & network enumeration tools
  • Token manipulation commands
    • Steal Token Mask(WIP)
  • Lateral Movement Commands
  • Jump (psexec, wmi, wmi-ps, winrm, dcom)
  • Remote Execution (WIP)
  • AMSI & ETW Patching
  • Unmanaged Powershell
  • Script Store (can load multiple scripts at once if needed)
  • Spawn & Inject
    • Spawn-to is configurable
  • run, shell & execute

Documentation

Documentation can be found in docs.

Getting Started

Prerequisites

  • Installation of the .NET 7 SDK from Microsoft
  • Once installed, the teamserver and client are started with dotnet run

Teamserver

To configure the team server's starting address (where clients will connect), edit HardHatC2\TeamServer\Properties\LaunchSettings.json, changing "applicationUrl": "https://127.0.0.1:5000" to the desired location and port. Start the teamserver with dotnet run from its top-level folder ../HardHatC2/Teamserver/
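
For reference, the relevant fragment of LaunchSettings.json looks roughly like this (the profile name is an assumption; only the applicationUrl value matters here):

{
  "profiles": {
    "TeamServer": {
      "applicationUrl": "https://127.0.0.1:5000"
    }
  }
}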

HardHat Client

  1. When starting the client, set the target teamserver location by including it on the command line, for example dotnet run https://127.0.0.1:5000
  2. Open a web browser and navigate to https://localhost:7096/; if this works, you should see the login page
  3. Log in with the HardHat_Admin user (the password is printed on first TeamServer startup)
  4. Navigate to the settings page & create a new user. If successful, a message should appear; you may then log in with that account to access the full client

Contributions & Bug Reports

Code contributions are welcome. Feel free to submit feature requests, pull requests, or send me your ideas on Discord.



โŒ