Smap is a replica of Nmap which uses shodan.io's free API for port scanning. It takes the same command-line arguments as Nmap and produces the same output, which makes it a drop-in replacement for Nmap.
You can download a pre-built binary from here and use it right away.
go install -v github.com/s0md3v/smap/cmd/smap@latest
Confused or something not working? For more detailed instructions, click here
Smap is available on AUR as smap-git (builds from source) and smap-bin (pre-built binary).
Smap is also available on Homebrew.
brew update
brew install smap
Smap takes the same arguments as Nmap, but options other than -p, -h, -o* and -iL are ignored. If you are unfamiliar with Nmap, here's how to use Smap.
smap 127.0.0.1 127.0.0.2
You can also use a list of targets, separated by newlines.
smap -iL targets.txt
Supported formats
1.1.1.1 // IPv4 address
example.com // hostname
178.23.56.0/8 // CIDR
Smap supports 6 output formats, which can be used with the -o* options as follows:
smap example.com -oX output.xml
If you want to print the output to the terminal, use a hyphen (-) as the filename.
Supported formats
oX // nmap's xml format
oG // nmap's greppable format
oN // nmap's default format
oA // output in all 3 formats above at once
oP // IP:PORT pairs separated by newlines
oS // custom smap format
oJ // json
Note: Since Nmap doesn't scan/display vulnerabilities and tags, that data is not available in Nmap's formats. Use -oS to view that info.
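As a sketch of why the -oP format is handy, IP:PORT pairs are easy to post-process with standard shell tools. The scan output below is fabricated for illustration; a real run would produce it with something like `smap example.com -oP -`.

```shell
#!/bin/sh
# Hypothetical -oP output: IP:PORT pairs separated by newlines.
# (This sample is made up; it stands in for real smap output.)
cat <<'EOF' > /tmp/smap_oP_sample.txt
1.1.1.1:80
1.1.1.1:443
178.23.56.14:22
EOF

# Extract the unique open ports seen across all hosts.
cut -d: -f2 /tmp/smap_oP_sample.txt | sort -n -u   # prints 22, 80, 443
```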
Smap scans these 1237 ports by default. If you want to display results for certain ports only, use the -p option.
smap -p21-30,80,443 -iL targets.txt
Since Smap simply fetches existing port data from shodan.io, it is super fast, but there's more to it. You should use Smap if:
BlackStone Project is a tool created to automate the drafting and submission of reports for ethical hacking or pentesting audits.
With this tool we can register in the database the vulnerabilities that we find during an audit, classifying them as internal, external or Wi-Fi findings. We can also add a description and a recommendation for each one, as well as its severity level and the effort required to fix it. This information then helps us generate a criticality table in the report as a global summary of the vulnerabilities found.
We can also register a company and, just by adding its web page, the tool will be able to find subdomains, telephone numbers, social networks, employee emails and more.
Install Docker
git clone https://github.com/micro-joan/BlackStone
cd BlackStone
docker-compose up -d
First you need to go to profile settings and add Hunter.io and haveibeenpwned.com tokens:
After adding vulnerabilities to the database, we will go to the audited client section and register a client along with their web page. Once registered, we can go to customer details and see the following information:
Once the company we are going to audit is registered in the database, we will create a report, adding the date, the name of the report and the company to be audited. When the report is registered, we will click edit and then select the vulnerabilities that we want to appear in the report:
Finally, we will generate the report by clicking the "overview report" button, save the generated page as ".mht", and then open it with Word to be able to work on the generated report:
kubeaudit is a command line tool and a Go package to audit Kubernetes clusters for various security concerns, such as:
tl;dr. kubeaudit makes sure you deploy secure containers!
To use kubeaudit as a Go package, see the package docs.
The rest of this README will focus on how to use kubeaudit as a command line tool.
brew install kubeaudit
Kubeaudit has official releases that are blessed and stable: Official releases
Master may have newer features than the stable releases. If you need a newer feature not yet included in a release, make sure you're using Go 1.17+ and run the following:
go get -v github.com/Shopify/kubeaudit
Start using kubeaudit with the Quick Start or view all the supported commands.
Prerequisite: kubectl v1.12.0 or later
With kubectl v1.12.0 introducing easy pluggability of external functions, kubeaudit can be invoked as kubectl audit by either:
- running make plugin and having $GOPATH/bin available in your path, or
- renaming the binary to kubectl-audit and having it available in your path.
We also release a Docker image: shopify/kubeaudit. To run kubeaudit as a job in your cluster see Running kubeaudit in a cluster.
kubeaudit has three modes:
Manifest mode
Local mode
Cluster mode
If a Kubernetes manifest file is provided using the -f/--manifest flag, kubeaudit will audit the manifest file.
Example command:
kubeaudit all -f "/path/to/manifest.yml"
Example output:
$ kubeaudit all -f "internal/test/fixtures/all_resources/deployment-apps-v1.yml"
---------------- Results for ---------------
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
namespace: deployment-apps-v1
--------------------------------------------
-- [error] AppArmorAnnotationMissing
Message: AppArmor annotation missing. The annotation 'container.apparmor.security.beta.kubernetes.io/container' should be added.
Metadata:
Container: container
MissingAnnotation: container.apparmor.security.beta.kubernetes.io/container
-- [error] AutomountServiceAccountTokenTrueAndDefaultSA
Message: Default service account with token mounted. automountServiceAccountToken should be set to 'false' or a non-default service account should be used.
-- [error] CapabilityShouldDropAll
Message: Capability not set to ALL. Ideally, you should drop ALL capabilities and add the specific ones you need to the add list.
Metadata:
Container: container
Capability: AUDIT_WRITE
...
If no errors with a given minimum severity are found, the following is returned:
All checks completed. 0 high-risk vulnerabilities found
Manifest mode also supports autofixing all security issues using the autofix command:
kubeaudit autofix -f "/path/to/manifest.yml"
To write the fixed manifest to a new file instead of modifying the source file, use the -o/--output flag.
kubeaudit autofix -f "/path/to/manifest.yml" -o "/path/to/fixed"
To fix a manifest based on custom rules specified in a kubeaudit config file, use the -k/--kconfig flag.
kubeaudit autofix -k "/path/to/kubeaudit-config.yml" -f "/path/to/manifest.yml" -o "/path/to/fixed"
Kubeaudit can detect if it is running within a container in a cluster. If so, it will try to audit all Kubernetes resources in that cluster:
kubeaudit all
Kubeaudit will try to connect to a cluster using the local kubeconfig file ($HOME/.kube/config). A different kubeconfig location can be specified using the --kubeconfig flag. To specify a context of the kubeconfig, use the -c/--context flag.
kubeaudit all --kubeconfig "/path/to/config" --context my_cluster
For more information on kubernetes config files, see https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
Kubeaudit produces results with three levels of severity:
Error: A security issue or invalid Kubernetes configuration
Warning: A best practice recommendation
Info: Informational, no action required. This includes results that are overridden
The minimum severity level can be set using the --minSeverity/-m flag.
By default kubeaudit will output results in a human-readable way. If the output is intended to be further processed, it can be set to output JSON using the --format json flag. To output results as logs (the previous default), use --format logrus. Some output formats include colors to make results easier to read in a terminal. To disable colors (for example, if you are sending output to a text file), you can use the --no-color flag.
If there are results of severity level error, kubeaudit will exit with exit code 2. This can be changed using the --exitcode/-e flag.
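As a sketch of how that exit code might gate a CI step, the pattern below branches on kubeaudit's exit status. The kubeaudit invocation is stubbed with a shell function here so the snippet is self-contained; a real pipeline would call the actual binary.

```shell
#!/bin/sh
# Stub standing in for the real binary: pretend "error" results were found,
# so it returns the default error exit code of 2.
kubeaudit() { return 2; }

if kubeaudit all -f manifest.yml --minseverity error; then
  echo "audit passed"
else
  echo "audit failed with exit code $?"   # prints: audit failed with exit code 2
fi
```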
For all the ways kubeaudit can be customized, see Global Flags.
Command | Description | Documentation |
---|---|---|
all | Runs all available auditors, or those specified using a kubeaudit config. | docs |
autofix | Automatically fixes security issues. | docs |
version | Prints the current kubeaudit version. | |
Auditors can also be run individually.
Command | Description | Documentation |
---|---|---|
apparmor | Finds containers running without AppArmor. | docs |
asat | Finds pods using an automatically mounted default service account. | docs |
capabilities | Finds containers that do not drop the recommended capabilities or add new ones. | docs |
deprecatedapis | Finds any resource defined with a deprecated API version. | docs |
hostns | Finds containers that have HostPID, HostIPC or HostNetwork enabled. | docs |
image | Finds containers which do not use the desired version of an image (via the tag) or use an image without a tag. | docs |
limits | Finds containers which exceed the specified CPU and memory limits or do not specify any. | docs |
mounts | Finds containers that have sensitive host paths mounted. | docs |
netpols | Finds namespaces that do not have a default-deny network policy. | docs |
nonroot | Finds containers running as root. | docs |
privesc | Finds containers that allow privilege escalation. | docs |
privileged | Finds containers running as privileged. | docs |
rootfs | Finds containers which do not have a read-only filesystem. | docs |
seccomp | Finds containers running without Seccomp. | docs |
Short | Long | Description |
---|---|---|
| | --format | The output format to use (one of "pretty", "logrus", "json") (default is "pretty") |
| | --kubeconfig | Path to local Kubernetes config file. Only used in local mode (default is $HOME/.kube/config) |
-c | --context | The name of the kubeconfig context to use |
-f | --manifest | Path to the yaml configuration to audit. Only used in manifest mode. You may use - to read from stdin. |
-n | --namespace | Only audit resources in the specified namespace. Not currently supported in manifest mode. |
-g | --includegenerated | Include generated resources in scan (such as Pods generated by deployments). If you would like kubeaudit to produce results for generated resources (for example if you have custom resources or want to catch orphaned resources where the owner resource no longer exists) you can use this flag. |
-m | --minseverity | Set the lowest severity level to report (one of "error", "warning", "info") (default is "info") |
-e | --exitcode | Exit code to use if there are results with severity of "error". Conventionally, 0 is used for success and all non-zero codes for an error. (default is 2) |
| | --no-color | Don't use colors in the output (default is false) |
The kubeaudit config can be used for two things:
1. Enabling/disabling auditors
2. Specifying configuration for auditors
Any configuration that can be specified using flags for the individual auditors can be represented using the config.
The config has the following format:
enabledAuditors:
# Auditors are enabled by default if they are not explicitly set to "false"
apparmor: false
asat: false
capabilities: true
deprecatedapis: true
hostns: true
image: true
limits: true
mounts: true
netpols: true
nonroot: true
privesc: true
privileged: true
rootfs: true
seccomp: true
auditors:
capabilities:
# add capabilities needed to the add list, so kubeaudit won't report errors
allowAddList: ['AUDIT_WRITE', 'CHOWN']
deprecatedapis:
  # If no versions are specified and the 'deprecatedapis' auditor is enabled, WARN
  # results will be generated for the resources defined with a deprecated API.
currentVersion: '1.22'
targetedVersion: '1.25'
image:
  # If no image is specified and the 'image' auditor is enabled, WARN results
  # will be generated for containers which use an image without a tag
image: 'myimage:mytag'
limits:
# If no limits are specified and the 'limits' auditor is enabled, WARN results
# will be generated for containers which have no cpu or memory limits specified
cpu: '750m'
    memory: '500Mi'
For more details about each auditor, including a description of the auditor-specific configuration in the config, see the Auditor Docs.
Note: The kubeaudit config is not the same as the kubeconfig file specified with the --kubeconfig flag, which refers to the Kubernetes config file (see Local Mode). Also note that only the all and autofix commands support using a kubeaudit config. It will not work with other commands.
Note: If flags are used in combination with the config file, flags will take precedence.
Security issues can be ignored for specific containers or pods by adding override labels. This means the auditor will produce info results instead of error results, and the audit result name will have Allowed appended to it. The labels are documented in each auditor's documentation, but the general format for auditors that support overrides is as follows:
An override label consists of a key and a value.
The key is a combination of the override type (container or pod) and an override identifier which is unique to each auditor (see the docs for the specific auditor). The key can take one of two forms depending on the override type:
container.audit.kubernetes.io/[container name].[override identifier]
audit.kubernetes.io/pod.[override identifier]
If the value is set to a non-empty string, it will be displayed in the info result as the OverrideReason:
$ kubeaudit asat -f "auditors/asat/fixtures/service-account-token-true-allowed.yml"
---------------- Results for ---------------
apiVersion: v1
kind: ReplicationController
metadata:
name: replicationcontroller
namespace: service-account-token-true-allowed
--------------------------------------------
-- [info] AutomountServiceAccountTokenTrueAndDefaultSAAllowed
Message: Audit result overridden: Default service account with token mounted. automountServiceAccountToken should be set to 'false' or a non-default service account should be used.
Metadata:
OverrideReason: SomeReason
As per the Kubernetes spec, value must be 63 characters or less and must be empty or begin and end with an alphanumeric character ([a-z0-9A-Z]), with dashes (-), underscores (_), dots (.), and alphanumerics between.
Multiple override labels (for multiple auditors) can be added to the same resource.
See the specific auditor docs for the auditor you wish to override for examples.
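As a sketch, a pod-level override for the asat auditor might look like the following. The override identifier allow-automount-service-account-token and the manifest details are assumptions for illustration; check the asat auditor's docs for the exact identifier.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    # key: the "pod" override type plus the auditor's override identifier
    # (assumed here); value: shown in the info result as the OverrideReason.
    audit.kubernetes.io/pod.allow-automount-service-account-token: "SomeReason"
spec:
  automountServiceAccountToken: true
  containers:
    - name: container
      image: myimage:mytag
```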
To learn more about labels, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
If you'd like to fix a bug, contribute a feature or just correct a typo, please feel free to do so as long as you follow our Code of Conduct.
go get github.com/Shopify/kubeaudit
cd $GOPATH/src/github.com/Shopify/kubeaudit
git remote add fork https://github.com/you-are-awesome/kubeaudit
git checkout -b awesome-new-feature
make test (to run tests without Kind: USE_KIND=false make test)
git commit -am 'Adds awesome feature'
git push fork
Note that if you didn't sign the CLA before opening your PR, you can re-run the check by adding a comment to the PR that says "I've signed the CLA!"
Introduction In the previous article, we understood how print functions like printf work. This article provides further definition of Format String vulnerabilities. We will begin by discussing how Format Strings can be used in an unusual way, which is a starting point to understanding Format String exploits. Next, we will understand what kind of mistakes […]
The post Format String Vulnerabilities: Use and Definitions appeared first on Infosec Resources.
Introduction In the previous articles, we discussed printing functions, format strings and format string vulnerabilities. This article provides an overview of how Format String vulnerabilities can be exploited. In this article, we will begin by solving a simple challenge to leak a secret from memory. In the next article, we will discuss another example, where […]
The post How to exploit Format String Vulnerabilities appeared first on Infosec Resources.
In July 2015, I did my first threat webinar. I had planned to do it on a monthly basis, and never imagined I would still be doing it five years later, but here I am, still creating monthly webinars. I started the webinar series to help people understand the different threats targeting our customers, and I have always tried to focus on three areas:
This last point, discussing technologies versus solutions, has been one of the key items I try to follow as much as possible – after all, the goal of my webinars is to be educational, not a sales pitch.
Coming from a technical background (BS in Electrical Engineering from Michigan State University, Go Spartans!!), I enjoy learning about the new technologies being used to detect the latest threats and ensuring you know what to look for when selecting a vendor and/or a security solution. Over the years, I've discussed everything from APTs, coinminers, exploits, messaging threats, ransomware, underground activity and lots in between. It is pretty easy to find topics to discuss, as there is so much going on in our industry, and with the malicious actors regularly shifting their tactics, techniques and procedures, I can keep the content fairly fresh.
I really enjoy having guest speakers on my webinars to mix things up a bit for the viewers as well, as I know my limitations – there are just too many threats out there to keep up with all of them. The main reason I love doing the threat webinars is that I enjoy sharing information and teaching others about our industry and the threats affecting them. If you want to check out any of my previous five years of webinars you can watch them here.
For my fifth year anniversary I wanted to try something different and I would like to do an open Q&A session. As I’ve never done this before, it will certainly be an interesting experience for me, but hopefully for you as well. I hope I can answer a majority of your questions, but I know some of you are way too smart for me, so please bear with me.
Our registration page for this webinar allows you to submit any pre-session questions that I’ll answer throughout the webinar. You can ask me anything that is on your mind and if I cannot get to your question, I’ll do my best to answer you afterwards in an email.
I hope to continue to do these webinars for the foreseeable future and I would like to end my post by thanking each and every one of you who has participated in my webinars over the years. It has been a pleasure, and I look forward to answering your questions.
Take care, stay healthy, and keep on smiling!
Jon
The post Ask Me Anything – Celebrating The Fifth Anniversary Of My Monthly Threat Webinar appeared first on .
Risk decisions are the foundation of information security. Sadly, they are also one of the most often misunderstood parts of information security.
This is bad enough on its own but can sink any effort at education as an organization moves towards a DevOps philosophy.
To properly evaluate the risk of an event, two components are required:
Unfortunately, teams—and humans in general—are reasonably good at the first part and unreasonably bad at the second.
This is a problem.
It's a problem that is amplified when security starts to integrate with teams in a DevOps environment. Originally presented as part of AllTheTalks.online, this talk examines the ins and outs of risk decisions and how we can start to improve how our teams handle them.
The post Risk Decisions in an Imperfect World appeared first on .
So much for a quiet January! By now you must have heard about the new Microsoft® vulnerability CVE-2020-0601, first disclosed by the NSA (making it the first Windows bug publicly attributed to the National Security Agency). This vulnerability is found in a cryptographic component that has a range of functions—an important one being the ability to digitally sign software, which certifies that the software has not been tampered with. Using this vulnerability, attackers can sign malicious executables to make them look legitimate, leading to potentially disastrous man-in-the-middle attacks.
Here’s the good news. Microsoft has already released a patch to protect against any exploits stemming from this vulnerability. But here’s the catch: You have to patch!
While Trend Micro offers industry-leading virtual patching capabilities via our endpoint, cloud, and network security solutions, the best protection against vulnerabilities is to deploy a real patch from the software vendor. Let me say it again for effect – the best protection against this very serious vulnerability is to ensure the affected systems are patched with Microsoft’s latest security update.
We understand how difficult it can be to patch systems in a timely manner, so we created a valuable tool that will test your endpoints to see whether they have been patched against this latest threat or if they are still vulnerable. Additionally, to ensure you are protected against any potential threats, we have just released additional layers of protection in the form of IPS rules for Trend Micro Deep Security and Trend Micro Vulnerability Protection (including Trend Micro Apex One). This was rolled out to help organizations strengthen their overall security posture and provide some protection during lengthy patching processes.
You can download our Trend Micro Vulnerability Assessment Tool right now to see if you are protected against the latest Microsoft vulnerability. And while you’re at it, check out our latest Knowledge Based Article for additional information on this new vulnerability along with Trend Micro security capabilities that help protect customers like you 24/7. Even during those quiet days in January.
The post Don’t Let the Vulnera-Bullies Win. Use our free tool to see if you are patched against Vulnerability CVE-2020-0601 appeared first on .