
Infrastructure Laundering: Blending in with the Cloud

Image: Shutterstock, ArtHead.

In an effort to blend in and make their malicious traffic tougher to block, hosting firms catering to cybercriminals in China and Russia increasingly are funneling their operations through major U.S. cloud providers. Research published this week on one such outfit — a sprawling network tied to Chinese organized crime gangs and aptly named “Funnull” — highlights a persistent whac-a-mole problem facing cloud services.

In October 2024, the security firm Silent Push published a lengthy analysis of how Amazon AWS and Microsoft Azure were providing services to Funnull, a two-year-old Chinese content delivery network that hosts a wide variety of fake trading apps, pig butchering scams, gambling websites, and retail phishing pages.

Funnull made headlines last summer after it acquired the domain name polyfill[.]io, previously the home of a widely-used open source code library that allowed older browsers to handle advanced functions that weren’t natively supported. There were still tens of thousands of legitimate domains linking to the Polyfill domain at the time of its acquisition, and Funnull soon after conducted a supply-chain attack that redirected visitors to malicious sites.

Silent Push’s October 2024 report found a vast number of domains hosted via Funnull promoting gambling sites that bear the logo of the Suncity Group, a Chinese entity named in a 2024 UN report (PDF) for laundering millions of dollars for the North Korean Lazarus Group.

In 2023, Suncity’s CEO was sentenced to 18 years in prison on charges of fraud, illegal gambling, and “triad offenses,” i.e. working with Chinese transnational organized crime syndicates. Suncity is alleged to have built an underground banking system that laundered billions of dollars for criminals.

It is likely the gambling sites coming through Funnull are abusing top casino brands as part of their money laundering schemes. In reporting on Silent Push’s October report, TechCrunch obtained a comment from Bwin, one of the casinos being advertised en masse through Funnull, and Bwin said those websites did not belong to them.

Gambling is illegal in China except in Macau, a special administrative region of China. Silent Push researchers say Funnull may be helping online gamblers in China evade the Communist party’s “Great Firewall,” which blocks access to gambling destinations.

Silent Push’s Zach Edwards said that upon revisiting Funnull’s infrastructure this month, they found dozens of the same Amazon and Microsoft cloud Internet addresses still forwarding Funnull traffic through a dizzying chain of auto-generated domain names before redirecting visitors to malicious or phishing websites.

Edwards said Funnull is a textbook example of an increasing trend Silent Push calls “infrastructure laundering,” wherein crooks selling cybercrime services will relay some or all of their malicious traffic through U.S. cloud providers.

“It’s crucial for global hosting companies based in the West to wake up to the fact that extremely low quality and suspicious web hosts based out of China are deliberately renting IP space from multiple companies and then mapping those IPs to their criminal client websites,” Edwards told KrebsOnSecurity. “We need these major hosts to create internal policies so that if they are renting IP space to one entity, who further rents it to host numerous criminal websites, all of those IPs should be reclaimed and the CDN who purchased them should be banned from future IP rentals or purchases.”

A Suncity gambling site promoted via Funnull. The sites feature a prompt for a Tether/USDT deposit program.

Reached for comment, Amazon referred this reporter to a statement Silent Push included in a report released today. Amazon said AWS was already aware of the Funnull addresses tracked by Silent Push, and that it had suspended all known accounts linked to the activity.

Amazon said that contrary to implications in the Silent Push report, it has every reason to aggressively police its network against this activity, noting the accounts tied to Funnull used “fraudulent methods to temporarily acquire infrastructure, for which it never pays. Thus, AWS incurs damages as a result of the abusive activity.”

“When AWS’s automated or manual systems detect potential abuse, or when we receive reports of potential abuse, we act quickly to investigate and take action to stop any prohibited activity,” Amazon’s statement continues. “In the event anyone suspects that AWS resources are being used for abusive activity, we encourage them to report it to AWS Trust & Safety using the report abuse form. In this case, the authors of the report never notified AWS of the findings of their research via our easy-to-find security and abuse reporting channels. Instead, AWS first learned of their research from a journalist to whom the researchers had provided a draft.”

Microsoft likewise said it takes such abuse seriously, and encouraged others to report suspicious activity found on its network.

“We are committed to protecting our customers against this kind of activity and actively enforce acceptable use policies when violations are detected,” Microsoft said in a written statement. “We encourage reporting suspicious activity to Microsoft so we can investigate and take appropriate actions.”

Richard Hummel is threat intelligence lead at NETSCOUT. Hummel said it used to be that “noisy” and frequently disruptive malicious traffic — such as automated application layer attacks, and “brute force” efforts to crack passwords or find vulnerabilities in websites — came mostly from botnets, or large collections of hacked devices.

But he said the vast majority of the infrastructure used to funnel this type of traffic is now proxied through major cloud providers, which can make it difficult for organizations to block at the network level.

“From a defender’s point of view, you can’t wholesale block cloud providers, because a single IP can host thousands or tens of thousands of domains,” Hummel said.

In May 2024, KrebsOnSecurity published a deep dive on Stark Industries Solutions, an ISP that materialized at the start of Russia’s invasion of Ukraine and has been used as a global proxy network that conceals the true source of cyberattacks and disinformation campaigns against enemies of Russia. Experts said much of the malicious traffic traversing Stark’s network (e.g. vulnerability scanning and password brute force attacks) was being bounced through U.S.-based cloud providers.

Stark’s network has been a favorite of the Russian hacktivist group called NoName057(16), which frequently launches huge distributed denial-of-service (DDoS) attacks against a variety of targets seen as opposed to Moscow. Hummel said NoName’s history suggests they are adept at cycling through new cloud provider accounts, making anti-abuse efforts into a game of whac-a-mole.

“It almost doesn’t matter if the cloud provider is on point and takes it down because the bad guys will just spin up a new one,” he said. “Even if they’re only able to use it for an hour, they’ve already done their damage. It’s a really difficult problem.”

Edwards said Amazon declined to specify whether the banned Funnull users were operating using compromised accounts or stolen payment card data, or something else.

“I’m surprised they wanted to lean into ‘We’ve caught this 1,200+ times and have taken these down!’ and yet didn’t connect that each of those IPs was mapped to [the same] Chinese CDN,” he said. “We’re just thankful Amazon confirmed that account mules are being used for this and it isn’t some front-door relationship. We haven’t heard the same thing from Microsoft but it’s very likely that the same thing is happening.”

Funnull wasn’t always a bulletproof hosting network for scam sites. Prior to 2022, the network was known as Anjie CDN, based in the Philippines. One of Anjie’s properties was a website called funnull[.]app. Loading that domain reveals a pop-up message by the original Anjie CDN owner, who said their operations had been seized by an entity known as Fangneng CDN and ACB Group, the parent company of Funnull.

A machine-translated message from the former owner of Anjie CDN, a Chinese content delivery network that is now Funnull.

“After I got into trouble, the company was managed by my family,” the message explains. “Because my family was isolated and helpless, they were persuaded by villains to sell the company. Recently, many companies have contacted my family and threatened them, believing that Fangneng CDN used penetration and mirroring technology through customer domain names to steal member information and financial transactions, and stole customer programs by renting and selling servers. This matter has nothing to do with me and my family. Please contact Fangneng CDN to resolve it.”

In January 2024, the U.S. Department of Commerce issued a proposed rule that would require cloud providers to create a “Customer Identification Program” that includes procedures to collect data sufficient to determine whether each potential customer is a foreign or U.S. person.

According to the law firm Crowell & Moring LLP, the Commerce rule also would require “infrastructure as a service” (IaaS) providers to report knowledge of any transactions with foreign persons that might allow the foreign entity to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity.

“The proposed rulemaking has garnered global attention, as its cross-border data collection requirements are unprecedented in the cloud computing space,” Crowell wrote. “To the extent the U.S. alone imposes these requirements, there is concern that U.S. IaaS providers could face a competitive disadvantage, as U.S. allies have not yet announced similar foreign customer identification requirements.”

It remains unclear if the new White House administration will push forward with the requirements. The Commerce action was mandated as part of an executive order President Trump issued a day before leaving office in January 2021.

MasterCard DNS Error Went Unnoticed for Years

The payment card giant MasterCard just fixed a glaring error in its domain name server settings that could have allowed anyone to intercept or divert Internet traffic for the company by registering an unused domain name. The misconfiguration persisted for nearly five years until a security researcher spent $300 to register the domain and prevent it from being grabbed by cybercriminals.

A DNS lookup on the domain az.mastercard.com on Jan. 14, 2025 shows the mistyped domain name a22-65.akam.ne.

From June 30, 2020 until January 14, 2025, one of the core Internet servers that MasterCard uses to direct traffic for portions of the mastercard.com network was misnamed. MasterCard.com relies on five shared Domain Name System (DNS) servers at the Internet infrastructure provider Akamai [DNS acts as a kind of Internet phone book, by translating website names to numeric Internet addresses that are easier for computers to manage].

All of the Akamai DNS server names that MasterCard uses are supposed to end in “akam.net” but one of them was misconfigured to rely on the domain “akam.ne.”
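This class of typo is straightforward to audit for. Below is a minimal sketch in TypeScript (assuming Node.js; the allowlist of sanctioned name server suffixes is illustrative, not MasterCard's actual policy) that resolves a zone's authoritative name servers and flags any that fall outside the expected domain:

// Minimal sketch: flag NS records whose domain isn't on an allowlist.
// Assumes Node.js; EXPECTED_SUFFIXES is illustrative only.
import { promises as dns } from "node:dns";

const EXPECTED_SUFFIXES = [".akam.net"];

async function auditNameServers(zone: string): Promise<void> {
  // resolveNs returns the zone's authoritative name server host names,
  // e.g. ["a1-35.akam.net", ..., "a22-65.akam.ne"] for the misconfigured zone.
  const servers = await dns.resolveNs(zone);
  for (const ns of servers) {
    const ok = EXPECTED_SUFFIXES.some((s) => ns.toLowerCase().endsWith(s));
    if (!ok) console.warn(`Suspicious name server for ${zone}: ${ns}`);
  }
}

auditNameServers("az.mastercard.com").catch(console.error);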

This tiny but potentially critical typo was discovered recently by Philippe Caturegli, founder of the security consultancy Seralys. Caturegli said he guessed that nobody had yet registered the domain akam.ne, which is under the purview of the top-level domain authority for the West African nation of Niger.

Caturegli said it took $300 and nearly three months of waiting to secure the domain with the registry in Niger. After enabling a DNS server on akam.ne, he noticed hundreds of thousands of DNS requests hitting his server each day from locations around the globe. Apparently, MasterCard wasn’t the only organization that had fat-fingered a DNS entry to include “akam.ne,” but they were by far the largest.

Had he enabled an email server on his new domain akam.ne, Caturegli likely would have received wayward emails directed toward mastercard.com or other affected domains. If he’d abused his access, he probably could have obtained website encryption certificates (SSL/TLS certs) that were authorized to accept and relay web traffic for affected websites. He may even have been able to passively receive Microsoft Windows authentication credentials from employee computers at affected companies.

But the researcher said he didn’t attempt to do any of that. Instead, he alerted MasterCard that the domain was theirs if they wanted it, copying this author on his notifications. A few hours later, MasterCard acknowledged the mistake, but said there was never any real threat to the security of its operations.

“We have looked into the matter and there was not a risk to our systems,” a MasterCard spokesperson wrote. “This typo has now been corrected.”

Meanwhile, Caturegli received a request submitted through Bugcrowd, a program that offers financial rewards and recognition to security researchers who find flaws and work privately with the affected vendor to fix them. The message suggested his public disclosure of the MasterCard DNS error via a post on LinkedIn (after he’d secured the akam.ne domain) was not aligned with ethical security practices, and passed on a request from MasterCard to have the post removed.

MasterCard’s request to Caturegli, a.k.a. “Titon” on infosec.exchange.

Caturegli said while he does have an account on Bugcrowd, he has never submitted anything through the Bugcrowd program, and that he reported this issue directly to MasterCard.

“I did not disclose this issue through Bugcrowd,” Caturegli wrote in reply. “Before making any public disclosure, I ensured that the affected domain was registered to prevent exploitation, mitigating any risk to MasterCard or its customers. This action, which we took at our own expense, demonstrates our commitment to ethical security practices and responsible disclosure.”

Most organizations have at least two authoritative domain name servers, but some handle so many DNS requests that they need to spread the load over additional DNS server domains. In MasterCard’s case, that number is five, so it stands to reason that if an attacker managed to seize control over just one of those domains they would only be able to see about one-fifth of the overall DNS requests coming in.

But Caturegli said the reality is that many Internet users are relying at least to some degree on public traffic forwarders or DNS resolvers like Cloudflare and Google.

“So all we need is for one of these resolvers to query our name server and cache the result,” Caturegli said. By setting their DNS server records with a long TTL or “Time To Live” — a setting that controls how long resolvers are allowed to cache a DNS record — an attacker’s poisoned instructions for the target domain can be propagated by large cloud providers.

“With a long TTL, we may reroute a LOT more than just 1/5 of the traffic,” he said.
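The TTL in play here is easy to observe. As a rough illustration (assuming Node.js; the hostname is just an example), the following resolves A records along with the TTL a resolver would honor — the same value that determines how long a poisoned answer would linger in a large public resolver's cache:

// Sketch: inspect the TTLs that resolvers will cache answers for.
import { promises as dns } from "node:dns";

async function showTtls(host: string): Promise<void> {
  // { ttl: true } returns each address with its remaining cache lifetime.
  const records = await dns.resolve4(host, { ttl: true });
  for (const { address, ttl } of records) {
    // A poisoned record with ttl = 86400 could persist in resolver
    // caches for a full day after the rogue name server disappears.
    console.log(`${address} (cacheable for up to ${ttl}s)`);
  }
}

showTtls("az.mastercard.com").catch(console.error);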

The researcher said he’d hoped that the credit card giant might thank him, or at least offer to cover the cost of buying the domain.

“We obviously disagree with this assessment,” Caturegli wrote in a follow-up post on LinkedIn regarding MasterCard’s public statement. “But we’ll let you judge — here are some of the DNS lookups we recorded before reporting the issue.”

Caturegli posted this screenshot of MasterCard domains that were potentially at risk from the misconfigured domain.

As the screenshot above shows, the misconfigured DNS server Caturegli found involved the MasterCard subdomain az.mastercard.com. It is not clear exactly how MasterCard uses this subdomain; however, the naming conventions suggest the domains correspond to production servers at Microsoft’s Azure cloud service. Caturegli said the domains all resolve to Internet addresses at Microsoft.

“Don’t be like Mastercard,” Caturegli concluded in his LinkedIn post. “Don’t dismiss risk, and don’t let your marketing team handle security disclosures.”

One final note: The domain akam.ne has been registered previously — in December 2016 by someone using the email address um-i-delo@yandex.ru. The Russian search giant Yandex reports this user account belongs to an “Ivan I.” from Moscow. Passive DNS records from DomainTools.com show that between 2016 and 2018 the domain was connected to an Internet server in Germany, and that the domain was left to expire in 2018.

This is interesting given a comment on Caturegli’s LinkedIn post from an ex-Cloudflare employee who linked to a report he co-authored on a similar typo domain apparently registered in 2017 for organizations that may have mistyped their AWS DNS server as “awsdns-06.ne” instead of “awsdns-06.net.” DomainTools reports that this typo domain also was registered to a Yandex user (playlotto@yandex.ru), and was hosted at the same German ISP — Team Internet (AS61969).

Closer to the Edge: Hyperscaling Have I Been Pwned with Cloudflare Workers and Caching


I've spent more than a decade now writing about how to make Have I Been Pwned (HIBP) fast. Really fast. Fast to the extent that sometimes, it was even too fast:

The response from each search was coming back so quickly that the user wasn’t sure if it was legitimately checking subsequent addresses they entered or if there was a glitch.

Over the years, the service has evolved to use emerging techniques that not only make things fast, but also make them scale better under load, increase availability and sometimes even drive down cost. For example, 8 years ago now I started rolling the most important services to Azure Functions, "serverless" code that was no longer bound by logical machines and would just scale out to whatever volume of requests was thrown at it. And just last year, I turned on Cloudflare cache reserve to ensure that all cachable objects remained cached, even under conditions where they previously would have been evicted.

And now, the pièce de résistance, the coolest performance thing we've done to date (and it is now "we", thank you Stefán): just caching the whole lot at Cloudflare. Everything. Every search you do... almost. Let me explain, firstly by way of some background:

When you hit any of the services on HIBP, the first place the traffic goes from your browser is to one of Cloudflare's 330 "edge nodes":


As I sit here writing this on the Gold Coast on Australia's eastern seaboard, any request I make to HIBP hits that edge node on the far right of the Aussie continent, just up the road in Brisbane. The capital city of our great state of Queensland is just a short jet ski away, about 80km as the crow flies. Before now, every single time I searched HIBP from home, my request bytes would travel up the wire to Brisbane and then take a giant 12,000km trip to Seattle where the Azure Function in the West US Azure data centre would query the database before sending the response 12,000km back west to Cloudflare's edge node, then the final 80km down to my Surfers Paradise home. But what if it didn't have to be that way? What if that data was already sitting on the Cloudflare edge node in Brisbane? And the one in Paris, and the one in, well, I'm not even sure where all those blue dots are, but what if it was everywhere? Several awesome things would happen:

  1. You'd get your response much faster as we've just shaved off more than 99% of the distance the bytes need to travel.
  2. The availability would massively improve as there are far fewer nodes for the traffic to traverse through, plus when a response is cached, we're no longer dependent on the Azure Function or underlying storage mechanism.
  3. We'd save on Azure Function execution costs, storage account hits and especially egress bandwidth (which is very expensive).

In short, pushing data and processing "closer to the edge" benefits both our customers and ourselves. But how do you do that for 5 billion unique email addresses? (Note: as of today, HIBP reports over 14 billion breached accounts; the number of unique email addresses is lower because, on average, each breached address has appeared in multiple breaches.) To answer this question, let's recap how the data is queried:

  1. Via the front page of the website. This hits a "unified search" API which accepts an email address and uses Cloudflare's Turnstile to prohibit automated requests not originating from the browser.
  2. Via the public API. This endpoint also takes an email address as input and then returns all breaches it appears in.
  3. Via the k-anonymity enterprise API. This endpoint is used by a handful of large subscribers such as Mozilla and 1Password. Instead of searching by email address, it implements k-anonymity and searches by hash prefix.

Let's delve into that last point further because it's the secret sauce to how this whole caching model works. In order to provide subscribers of this service with complete anonymity over the email addresses being searched for, the only data passed to the API is the first six characters of the SHA-1 hash of the full email address. If this sounds odd, read the blog post linked to in that last bullet point for full details. The important thing for now, though, is that it means there are a total of 16^6 different possible requests that can be made to the API, which is just over 16 million. Further, we can transform the first two use cases above into k-anonymity searches on the server side, as it simply involves hashing the email address and taking those first six characters.
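In code, deriving the k-anonymity bucket is a one-liner. Here's a minimal TypeScript sketch (assuming Node.js; HIBP's exact input normalization isn't shown here):

// SHA-1 the address and keep the first six hex characters: that is
// the only value that ever leaves the client in a k-anonymity search.
import { createHash } from "node:crypto";

function hashPrefix(email: string): string {
  return createHash("sha1").update(email).digest("hex").toUpperCase().slice(0, 6);
}

console.log(hashPrefix("test@example.com")); // "567159", matching the worked example below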

In summary, this means we can boil the entire searchable database of email addresses down to the following:

  1. AAAAAA
  2. AAAAAB
  3. AAAAAC
  4. ...about 16 million other values...
  5. FFFFFD
  6. FFFFFE
  7. FFFFFF

That's a large albeit finite list, and that's what we're now caching. So, here's what a search via email address looks like:

  1. Address to search: test@example.com
  2. Full SHA-1 hash: 567159D622FFBB50B11B0EFD307BE358624A26EE
  3. Six char prefix: 567159
  4. API endpoint: https://[host]/[path]/567159
  5. If hash prefix is cached, retrieve result from there
  6. If hash prefix is not cached, query origin and save to cache
  7. Return result to client

K-anonymity searches obviously go straight to step four, skipping the first few steps as we already know the hash prefix. All of this happens in a Cloudflare worker, so it's "code on the edge" creating hashes, checking cache then retrieving from the origin where necessary. That code also takes care of handling parameters that transform queries, for example, filtering by domain or truncating the response. It's a beautiful, simple model that's all self-contained within a worker and a very simple origin API. But there's a catch - what happens when the data changes?
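That flow maps almost one-to-one onto the Workers cache API. Here's a heavily simplified sketch of the pattern — not HIBP's actual worker; the origin URL, cache key and TTL are all placeholders:

// Simplified cache-through worker (illustrative, not HIBP's production code).
export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const prefix = new URL(request.url).pathname.split("/").pop(); // e.g. "567159"
    const cacheKey = new Request(`https://cache.example/range/${prefix}`);
    const cache = caches.default;

    // Step 5: if the hash prefix is cached, serve it straight from the edge.
    let response = await cache.match(cacheKey);
    if (!response) {
      // Step 6: cache miss - query the origin and save the result to cache.
      const origin = await fetch(`https://origin.example/range/${prefix}`);
      response = new Response(origin.body, origin);
      response.headers.set("Cache-Control", "public, max-age=86400"); // placeholder TTL
      ctx.waitUntil(cache.put(cacheKey, response.clone()));
    }
    return response; // Step 7: return the result to the client
  },
};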

There are two events that can change cached data, one is simple and one is major:

  1. Someone opts out of public searchability and their email address needs to be removed. That's easy, we just call an API at Cloudflare and flush a single hash prefix.
  2. A new data breach is loaded and there are changes to a large number of hash prefixes. In this scenario, we flush the entire cache and start populating it again from scratch.

The second point is kind of frustrating as we've built up this beautiful collection of data all sitting close to the consumer where it's super fast to query, and then we nuke it all and go from scratch. The problem is it's either that or we selectively purge what could be many millions of individual hash prefixes, which you can't do:

For Zones on Enterprise plan, you may purge up to 500 URLs in one API call.

And:

Cache-Tag, host, and prefix purging each have a rate limit of 30,000 purge API calls in every 24 hour period.

We're giving all this further thought, but it's a non-trivial problem and a full cache flush is both easy and (near) instantaneous.
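For reference, both flush shapes go through the same Cloudflare purge endpoint. A sketch (the zone ID and token are placeholders, and the single-prefix purge assumes the cached object is addressable by URL):

// Sketch: Cloudflare cache purge calls; ZONE_ID and API_TOKEN are placeholders.
const url = "https://api.cloudflare.com/client/v4/zones/ZONE_ID/purge_cache";
const headers = { Authorization: "Bearer API_TOKEN", "Content-Type": "application/json" };

// Case 1: a single opt-out - purge just that hash prefix's URL.
await fetch(url, {
  method: "POST",
  headers,
  body: JSON.stringify({ files: ["https://api.example/range/567159"] }),
});

// Case 2: a new breach - flush the entire cache and repopulate from scratch.
await fetch(url, {
  method: "POST",
  headers,
  body: JSON.stringify({ purge_everything: true }),
});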

Enough words, let's get to some pictures! Here's a typical week of queries to the enterprise k-anonymity API:


This is a very predictable pattern, largely due to one particular subscriber regularly querying their entire customer base each day. (Sidenote: most of our enterprise level subscribers use callbacks such that we push updates to them via webhook when a new breach impacts their customers.) That's the total volume of inbound requests, but the really interesting bit is the requests that hit the origin (blue) versus those served directly by Cloudflare (orange):


Let's take the lowest blue data point towards the end of the graph as an example:


At that time, 96% of requests were served from Cloudflare's edge. Awesome! But look at it only a little bit later:


That's when I flushed cache for the Finsure breach, and 100% of traffic started being directed to the origin. (We're still seeing 14.24k hits via Cloudflare as, inevitably, some requests in that 1-hour block were to the same hash range and were served from cache.) It then took a whole 20 hours for the cache to repopulate to the extent that the hit:miss ratio returned to about 50:50:


Look back towards the start of the graph and you can see the same pattern from when I loaded the DemandScience breach. This all does pretty funky things to our origin API:


That last sudden increase is more than a 30x traffic increase in an instant! If we hadn't been careful about how we managed the origin infrastructure, we would have built a literal DDoS machine. Stefán will write later about how we manage the underlying database to ensure this doesn't happen, but even still, whilst we're dealing with the cyclical support patterns seen in that first graph above, I know that the best time to load a breach is later in the Aussie afternoon when the traffic is a third of what it is first thing in the morning. This helps smooth out the rate of requests to the origin such that by the time the traffic is ramping up, more of the content can be returned directly from Cloudflare. You can see that in the graphs above; that big peaky block towards the end of the last graph is pretty steady, even though the inbound traffic in the first graph over the same period of time increases quite significantly. It's like we're trying to race the increasing inbound traffic by building ourselves up a buffer in cache.

Here's another angle to this whole thing: now more than ever, loading a data breach costs us money. For example, by the end of the graphs above, we were cruising along at a 50% cache hit ratio, which meant we were only paying for half the Azure Function executions, egress bandwidth and underlying SQL database usage that we would have been otherwise. Flushing cache and suddenly sending all the traffic to the origin doubles our cost. Waiting until we're back at a 90% cache hit ratio literally increases those costs 10x when we flush. If I were to be completely financially ruthless about it, I would need to either load fewer breaches or bulk them together such that a cache flush is only ejecting a small amount of data anyway, but clearly, that's not what I've been doing 😄

There's just one remaining fly in the ointment...

Of those three methods of querying email addresses, the first is a no-brainer: searches from the front page of the website hit a Cloudflare Worker where it validates the Turnstile token and returns a result. Easy. However, the other two models (the public and enterprise APIs) have the added burden of validating the API key against Azure API Management (APIM), and the only place that exists is in the West US origin service. What this means for those endpoints is that before we can return search results from a location that may be just a short jet ski ride away, we need to go all the way to the other side of the world to validate the key and ensure the request is within the rate limit. We do this in the lightest possible way with barely any data transiting the request to check the key, plus we do it asynchronously with pulling the data back from the origin service if it isn't already in cache. In other words, we're as efficient as humanly possible, but we still cop a massive latency burden.

Doing API management at the origin is super frustrating, but there are really only two alternatives. The first is to distribute our APIM instance to other Azure data centres, and the problem with that is we need a Premium instance of the product. We presently run on a Basic instance, which means we're talking about a 19x increase in price just to unlock that ability. But that's just to go Premium; we then need at least one more instance somewhere else for this to make sense, which means we're talking about a 28x increase. And every region we add amplifies that even further. It's a financial non-starter.

The second option is for Cloudflare to build an API management product. This is the killer piece of this puzzle, as it would put all the checks and balances within the one edge node. It's a suggestion I've put forward on many occasions now, and who knows, maybe it's already in the works, but it's a suggestion I make out of a love of what the company does and a desire to go all-in on having them control the flow of our traffic. I did get a suggestion this week about rolling what is effectively a "poor man's API management" within workers, and it's a really cool suggestion, but it gets hard when people change plans or when we want to apply quotas to APIs rather than rate limits. So c'mon Cloudflare, let's make this happen!
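For what it's worth, the "poor man's API management" idea is roughly this shape: a worker that validates the key and counts requests against a per-key limit, with Workers KV as the store. Everything below is hypothetical — the KV binding, key schema and limits are illustrative, and KV's eventual consistency makes this a soft rate limit at best, which is exactly where plan changes and quotas start to hurt:

// Hypothetical "poor man's API management" worker; all names are illustrative.
interface Env {
  API_KEYS: KVNamespace; // assumed KV binding: plans and counters per API key
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = request.headers.get("hibp-api-key");
    if (!key) return new Response("missing key", { status: 401 });

    // Plan lookup: assume JSON like {"rpm": 100} stored when the key was issued.
    const plan = (await env.API_KEYS.get(`plan:${key}`, "json")) as { rpm: number } | null;
    if (!plan) return new Response("bad key", { status: 403 });

    // Approximate per-minute counter; KV is eventually consistent,
    // so this enforces a soft limit rather than a hard quota.
    const bucket = `count:${key}:${Math.floor(Date.now() / 60000)}`;
    const count = parseInt((await env.API_KEYS.get(bucket)) ?? "0", 10);
    if (count >= plan.rpm) return new Response("rate limited", { status: 429 });
    await env.API_KEYS.put(bucket, String(count + 1), { expirationTtl: 120 });

    return fetch(request); // key is valid and within limits: continue to the cached search
  },
};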

Finally, just one more stat on how powerful serving content directly from the edge is: I shared this stat last month for Pwned Passwords which serves well over 99% of requests from Cloudflare's cache reserve:

There it is - we’ve now passed 10,000,000,000 requests to Pwned Password in 30 days 😮 This is made possible with @Cloudflare’s support, massively edge caching the data to make it super fast and highly available for everyone. pic.twitter.com/kw3C9gsHmB

— Troy Hunt (@troyhunt) October 5, 2024

That's about 3,900 requests per second, on average, non-stop for 30 days. It's obviously way more than that at peak; just a quick glance through the last month and it looks like about 17k requests per second in a one-minute period a few weeks ago:


But it doesn't matter how high it is, because I never even think about it. I set up the worker, I turned on cache reserve, and that's it 😎

I hope you've enjoyed this post, Stefán and I will be doing a live stream on this topic at 06:00 AEST Friday morning for this week's regular video update, and it'll be available for replay immediately after. It's also embedded here for convenience:

Patch Tuesday, October 2024 Edition

Microsoft today released security updates to fix at least 117 security holes in Windows computers and other software, including two vulnerabilities that are already seeing active attacks. Also, Adobe plugged 52 security holes across a range of products, and Apple has addressed a bug in its new macOS 15 “Sequoia” update that broke many cybersecurity tools.

One of the zero-day flaws — CVE-2024-43573 — stems from a security weakness in MSHTML, the proprietary engine of Microsoft’s Internet Explorer web browser. If that sounds familiar it’s because this is the fourth MSHTML vulnerability found to be exploited in the wild so far in 2024.

Nikolas Cemerikic, a cybersecurity engineer at Immersive Labs, said the vulnerability allows an attacker to trick users into viewing malicious web content, which could appear legitimate thanks to the way Windows handles certain web elements.

“Once a user is deceived into interacting with this content (typically through phishing attacks), the attacker can potentially gain unauthorized access to sensitive information or manipulate web-based services,” he said.

Cemerikic noted that while Internet Explorer is being retired on many platforms, its underlying MSHTML technology remains active and vulnerable.

“This creates a risk for employees using these older systems as part of their everyday work, especially if they are accessing sensitive data or performing financial transactions online,” he said.

Probably the more serious zero-day this month is CVE-2024-43572, a code execution bug in the Microsoft Management Console, a component of Windows that gives system administrators a way to configure and monitor the system.

Satnam Narang, senior staff research engineer at Tenable, observed that the patch for CVE-2024-43572 arrived a few months after researchers at Elastic Security Labs disclosed an attack technique called GrimResource that leveraged an old cross-site scripting (XSS) vulnerability combined with a specially crafted Microsoft Saved Console (MSC) file to gain code execution privileges.

“Although Microsoft patched a different MMC vulnerability in September (CVE-2024-38259) that was neither exploited in the wild nor publicly disclosed,” Narang said. “Since the discovery of CVE-2024-43572, Microsoft now prevents untrusted MSC files from being opened on a system.”

Microsoft also patched Office, Azure, .NET, OpenSSH for Windows, Power BI, Windows Hyper-V, Windows Mobile Broadband, and Visual Studio. As usual, the SANS Internet Storm Center has a list of all Microsoft patches released today, indexed by severity and exploitability.

Late last month, Apple rolled out macOS 15, an operating system update called Sequoia that broke the functionality of security tools made by a number of vendors, including CrowdStrike, SentinelOne and Microsoft. On Oct. 7, Apple pushed an update to Sequoia users that addresses these compatibility issues.

Finally, Adobe has released security updates to plug a total of 52 vulnerabilities in a range of software, including Adobe Substance 3D Painter, Commerce, Dimension, Animate, Lightroom, InCopy, InDesign, Substance 3D Stager, and Adobe FrameMaker.

Please consider backing up important data before applying any updates. Zero-days aside, there’s generally little harm in waiting a few days to apply any pending patches, because not infrequently a security update introduces stability or compatibility issues. AskWoody.com usually has the skinny on any problematic patches.

And as always, if you run into any glitches after installing patches, leave a note in the comments; chances are someone else is stuck with the same issue and may have even found a solution.

It's Time to Master the Lift & Shift: Migrating from VMware vSphere to Microsoft Azure

While cloud adoption has been top of mind for many IT professionals for nearly a decade, it’s only in recent months, with industry changes and announcements from key players, that many recognize the time to make the move is now. It may feel like a daunting task, but tools exist to help you move your virtual machines (VMs) to a public cloud provider – like Microsoft Azure.

April’s Patch Tuesday Brings Record Number of Fixes

If only Patch Tuesdays came around infrequently — like total solar eclipse rare — instead of just creeping up on us each month like The Man in the Moon. Although to be fair, it would be tough for Microsoft to eclipse the number of vulnerabilities fixed in this month’s patch batch — a record 147 flaws in Windows and related software.

Yes, you read that right. Microsoft today released updates to address 147 security holes in Windows, Office, Azure, .NET Framework, Visual Studio, SQL Server, DNS Server, Windows Defender, Bitlocker, and Windows Secure Boot.

“This is the largest release from Microsoft this year and the largest since at least 2017,” said Dustin Childs, from Trend Micro’s Zero Day Initiative (ZDI). “As far as I can tell, it’s the largest Patch Tuesday release from Microsoft of all time.”

Tempering the sheer volume of this month’s patches is the middling severity of many of the bugs. Only three of April’s vulnerabilities earned Microsoft’s most-dire “critical” rating, meaning they can be abused by malware or malcontents to take remote control over unpatched systems with no help from users.

Most of the flaws that Microsoft deems “more likely to be exploited” this month are marked as “important,” which usually involve bugs that require a bit more user interaction (social engineering) but which nevertheless can result in system security bypass, compromise, and the theft of critical assets.

Ben McCarthy, lead cyber security engineer at Immersive Labs called attention to CVE-2024-20670, an Outlook for Windows spoofing vulnerability described as being easy to exploit. It involves convincing a user to click on a malicious link in an email, which can then steal the user’s password hash and authenticate as the user in another Microsoft service.

Another interesting bug McCarthy pointed to is CVE-2024-29063, which involves hard-coded credentials in Azure’s search backend infrastructure that could be gleaned by taking advantage of Azure AI search.

“This along with many other AI attacks in recent news shows a potential new attack surface that we are just learning how to mitigate against,” McCarthy said. “Microsoft has updated their backend and notified any customers who have been affected by the credential leakage.”

CVE-2024-29988 is a weakness that allows attackers to bypass Windows SmartScreen, a technology Microsoft designed to provide additional protections for end users against phishing and malware attacks. Childs said one of ZDI’s researchers found this vulnerability being exploited in the wild, although Microsoft doesn’t currently list CVE-2024-29988 as being exploited.

“I would treat this as in the wild until Microsoft clarifies,” Childs said. “The bug itself acts much like CVE-2024-21412 – a [zero-day threat from February] that bypassed the Mark of the Web feature and allows malware to execute on a target system. Threat actors are sending exploits in a zipped file to evade EDR/NDR detection and then using this bug (and others) to bypass Mark of the Web.”

Update, 7:46 p.m. ET: A previous version of this story said there were no zero-day vulnerabilities fixed this month. BleepingComputer reports that Microsoft has since confirmed that there are actually two zero-days. One is the flaw Childs just mentioned (CVE-2024-29988), and the other is CVE-2024-26234, described as a “proxy driver spoofing” weakness.

Satnam Narang at Tenable notes that this month’s release includes fixes for two dozen flaws in Windows Secure Boot, the majority of which are considered “Exploitation Less Likely” according to Microsoft.

“However, the last time Microsoft patched a flaw in Windows Secure Boot in May 2023 had a notable impact as it was exploited in the wild and linked to the BlackLotus UEFI bootkit, which was sold on dark web forums for $5,000,” Narang said. “BlackLotus can bypass functionality called secure boot, which is designed to block malware from being able to load when booting up. While none of these Secure Boot vulnerabilities addressed this month were exploited in the wild, they serve as a reminder that flaws in Secure Boot persist, and we could see more malicious activity related to Secure Boot in the future.”

For links to individual security advisories indexed by severity, check out ZDI’s blog and the Patch Tuesday post from the SANS Internet Storm Center. Please consider backing up your data or your drive before updating, and drop a note in the comments here if you experience any issues applying these fixes.

Adobe today released nine patches tackling at least two dozen vulnerabilities in a range of software products, including Adobe After Effects, Photoshop, Commerce, InDesign, Experience Manager, Media Encoder, Bridge, Illustrator, and Adobe Animate.

KrebsOnSecurity needs to correct the record on a point mentioned at the end of March’s “Fat Patch Tuesday” post, which looked at new AI capabilities built into Adobe Acrobat that are turned on by default. Adobe has since clarified that its apps won’t use AI to auto-scan your documents, as the original language in its FAQ suggested.

“In practice, no document scanning or analysis occurs unless a user actively engages with the AI features by agreeing to the terms, opening a document, and selecting the AI Assistant or generative summary buttons for that specific document,” Adobe said earlier this month.

U.S. Cyber Safety Board Slams Microsoft Over Breach by China-Based Hackers

The U.S. Cyber Safety Review Board (CSRB) has criticized Microsoft for a series of security lapses that led to the breach of nearly two dozen companies across Europe and the U.S. by a China-based nation-state group called Storm-0558 last year. The findings, released by the Department of Homeland Security (DHS) on Tuesday, found that the intrusion was preventable, and that it became successful

Patch Tuesday, March 2024 Edition

Apple and Microsoft recently released software updates to fix dozens of security holes in their operating systems. Microsoft today patched at least 60 vulnerabilities in its Windows OS. Meanwhile, Apple’s new macOS Sonoma addresses at least 68 security weaknesses, and its latest update for iOS fixes two zero-day flaws.

Last week, Apple pushed out an urgent software update to its flagship iOS platform, warning that there were at least two zero-day exploits for vulnerabilities being used in the wild (CVE-2024-23225 and CVE-2024-23296). The security updates are available in iOS 17.4, iPadOS 17.4, and iOS 16.7.6.

Apple’s macOS Sonoma 14.4 Security Update addresses dozens of security issues. Jason Kitka, chief information security officer at Automox, said the vulnerabilities patched in this update often stem from memory safety issues, a concern that has led to a broader industry conversation about the adoption of memory-safe programming languages [full disclosure: Automox is an advertiser on this site].

On Feb. 26, 2024, the Biden administration issued a report that calls for greater adoption of memory-safe programming languages. On Mar. 4, 2024, Google published Secure by Design, which lays out the company’s perspective on memory safety risks.

Mercifully, there do not appear to be any zero-day threats hounding Windows users this month (at least not yet). Satnam Narang, senior staff research engineer at Tenable, notes that of the 60 CVEs in this month’s Patch Tuesday release, only six are considered “more likely to be exploited” according to Microsoft.

Those more likely to be exploited bugs are mostly “elevation of privilege vulnerabilities,” including CVE-2024-26182 (Windows Kernel), CVE-2024-26170 (Windows Composite Image File System, or CimFS), CVE-2024-21437 (Windows Graphics Component), and CVE-2024-21433 (Windows Print Spooler).

Narang highlighted CVE-2024-21390 as a particularly interesting vulnerability in this month’s Patch Tuesday release, which is an elevation of privilege flaw in Microsoft Authenticator, the software giant’s app for multi-factor authentication. Narang said a prerequisite for an attacker to exploit this flaw is to already have a presence on the device either through malware or a malicious application.

“If a victim has closed and re-opened the Microsoft Authenticator app, an attacker could obtain multi-factor authentication codes and modify or delete accounts from the app,” Narang said. “Having access to a target device is bad enough as they can monitor keystrokes, steal data and redirect users to phishing websites, but if the goal is to remain stealth, they could maintain this access and steal multi-factor authentication codes in order to login to sensitive accounts, steal data or hijack the accounts altogether by changing passwords and replacing the multi-factor authentication device, effectively locking the user out of their accounts.”

CVE-2024-21334 earned a CVSS (danger) score of 9.8 (10 is the worst), and it concerns a weakness in Open Management Infrastructure (OMI), a Linux-based cloud infrastructure in Microsoft Azure. Microsoft says attackers could connect to OMI instances over the Internet without authentication, and then send specially crafted data packets to gain remote code execution on the host device.

CVE-2024-21435 is a CVSS 8.8 vulnerability in Windows OLE, which acts as a kind of backbone for a great deal of communication between applications that people use every day on Windows, said Ben McCarthy, lead cybersecurity engineer at Immersive Labs.

“With this vulnerability, there is an exploit that allows remote code execution. The attacker needs to trick a user into opening a document, and this document will exploit the OLE engine to download a malicious DLL to gain code execution on the system,” McCarthy explained. “The attack complexity has been described as low, meaning there is less of a barrier to entry for attackers.”

A full list of the vulnerabilities addressed by Microsoft this month is available at the SANS Internet Storm Center, which breaks down the updates by severity and urgency.

Finally, Adobe today issued security updates that fix dozens of security holes in a wide range of products, including Adobe Experience Manager, Adobe Premiere Pro, ColdFusion 2023 and 2021, Adobe Bridge, Lightroom, and Adobe Animate. Adobe said it is not aware of active exploitation against any of the flaws.

By the way, Adobe recently enrolled all of its Acrobat users into a “new generative AI feature” that scans the contents of your PDFs so that its new “AI Assistant” can “understand your questions and provide responses based on the content of your PDF file.” Adobe provides instructions on how to disable the AI features and opt out here.

Iran-Linked UNC1549 Hackers Target Middle East Aerospace & Defense Sectors

An Iran-nexus threat actor known as UNC1549 has been attributed with medium confidence to a new set of attacks targeting aerospace, aviation, and defense industries in the Middle East, including Israel and the U.A.E. Other targets of the cyber espionage activity likely include Turkey, India, and Albania, Google-owned Mandiant said in a new analysis. UNC1549 is said to overlap with

Microsoft Expands Free Logging Capabilities for all U.S. Federal Agencies

Microsoft has expanded free logging capabilities to all U.S. federal agencies using Microsoft Purview Audit irrespective of the license tier, more than six months after a China-linked cyber espionage campaign targeting two dozen organizations came to light. "Microsoft will automatically enable the logs in customer accounts and increase the default log retention period from 90 days to 180 days,"

Malicious 'SNS Sender' Script Abuses AWS for Bulk Smishing Attacks

A malicious Python script known as SNS Sender is being advertised as a way for threat actors to send bulk smishing messages by abusing Amazon Web Services (AWS) Simple Notification Service (SNS). The SMS phishing messages are designed to propagate malicious links that capture victims' personally identifiable information (PII) and payment card details, SentinelOne

Experts Detail New Flaws in Azure HDInsight Spark, Kafka, and Hadoop Services

Three new security vulnerabilities have been discovered in Azure HDInsight's Apache Hadoop, Kafka, and Spark services that could be exploited to achieve privilege escalation and a regular expression denial-of-service (ReDoS) condition. "The new vulnerabilities affect any authenticated user of Azure HDInsight services such as Apache Ambari and Apache Oozie," Orca security

Goblob - A Fast Enumeration Tool For Publicly Exposed Azure Storage Blobs

By: Zion3R


Goblob is a lightweight and fast enumeration tool designed to aid in the discovery of sensitive information exposed publicly in Azure blobs, which can be useful for various research purposes such as vulnerability assessments, penetration testing, and reconnaissance.

Warning: Goblob will issue individual goroutines for each container name to check in each storage account, limited only by the maximum number of concurrent goroutines specified in the -goroutines flag. This implementation can exhaust bandwidth pretty quickly in most cases with the default wordlist, or potentially cost you a lot of money if you're using the tool in a cloud environment. Make sure you understand what you are doing before running the tool.


Installation

go install github.com/Macmod/goblob@latest

Usage

To use goblob simply run the following command:

$ ./goblob <storageaccountname>

Where <storageaccountname> is the target storage account to enumerate public Azure blob storage URLs on.

You can also specify a list of storage account names to check:

$ ./goblob -accounts accounts.txt

By default, the tool will use a list of common Azure Blob Storage container names to construct potential URLs. However, you can also specify a custom list of container names using the -containers option. For example:

$ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt

The tool also supports outputting the results to a file using the -output option:

$ ./goblob -accounts accounts.txt -containers wordlists/goblob-folder-names.txt -output results.txt

If you want to provide accounts to test via stdin you can also omit -accounts (or the account name) entirely:

$ cat accounts.txt | ./goblob

Wordlists

Goblob comes bundled with basic wordlists that can be used with the -containers option.

Optional Flags

Goblob provides several flags that can be tuned in order to improve the enumeration process:

  • -goroutines=N - Maximum number of concurrent goroutines to allow (default: 5000).
  • -blobs=true - Report the URL of each blob instead of the URL of the containers (default: false).
  • -verbose=N - Set verbosity level (default: 1, min: 0, max: 3).
  • -maxpages=N - Maximum number of container pages to traverse looking for blobs (default: 20; set to -1 to disable the limit, or to 0 to skip listing blobs and just check whether the container is public).
  • -timeout=N - Timeout for HTTP requests in seconds (default: 90).
  • -maxidleconns=N - MaxIdleConns transport parameter for the HTTP client (default: 100).
  • -maxidleconnsperhost=N - MaxIdleConnsPerHost transport parameter for the HTTP client (default: 10).
  • -maxconnsperhost=N - MaxConnsPerHost transport parameter for the HTTP client (default: 0).
  • -skipssl=true - Skip SSL verification (default: false).
  • -invertsearch=true - Enumerate accounts for each container instead of containers for each account (default: false).

For instance, if you just want to find publicly exposed containers using large lists of storage accounts and container names, you should use -maxpages=0 to prevent the goroutines from paginating the results. Then run it again on the set of results you found with -blobs=true and -maxpages=-1 to actually get the URLs of the blobs.

If, on the other hand, you want to test a small list of very popular container names against a large set of storage accounts, you might want to try -invertsearch=true with -maxpages=0, in order to see the public accounts for each container name instead of the container names for each storage account.

You may also want to try changing -goroutines, -timeout, -maxidleconns, -maxidleconnsperhost, -maxconnsperhost and -skipssl in order to best use your bandwidth and find results faster.

Experiment with the flags to find what works best for you ;-)


Contributing

Contributions are welcome by opening an issue or by submitting a pull request.

TODO

  • Check blob domain for NXDOMAIN before trying wordlist to save bandwidth (maybe)
  • Improve default parameters for better performance

Wordcloud

An interesting visualization of popular container names found in my experiments with the tool:



Researchers Uncover Undetectable Crypto Mining Technique on Azure Automation

Cybersecurity researchers have developed what's believed to be the first fully undetectable cloud-based cryptocurrency miner leveraging the Microsoft Azure Automation service without racking up any charges. Cybersecurity company SafeBreach said it discovered three different methods to run the miner, including one that can be executed on a victim's environment without attracting any attention. "While this

Microsoft Warns of Cyber Attacks Attempting to Breach Cloud via SQL Server Instance

Microsoft has detailed a new campaign in which attackers unsuccessfully attempted to move laterally to a cloud environment through an SQL Server instance. "The attackers initially exploited a SQL injection vulnerability in an application within the target's environment," security researchers Sunders Bruskin, Hagai Ran Kestenberg, and Fady Nasereldeen said in a Tuesday report. "This allowed the

Researchers Detail 8 Vulnerabilities in Azure HDInsight Analytics Service

By: THN
More details have emerged about a set of now-patched cross-site scripting (XSS) flaws in the Microsoft Azure HDInsight open-source analytics service that could be weaponized by a threat actor to carry out malicious activities. "The identified vulnerabilities consisted of six stored XSS and two reflected XSS vulnerabilities, each of which could be exploited to perform unauthorized actions,

Experts Uncover How Cybercriminals Could Exploit Microsoft Entra ID for Elevated Privilege

By: THN
Cybersecurity researchers have discovered a case of privilege escalation associated with a Microsoft Entra ID (formerly Azure Active Directory) application by taking advantage of an abandoned reply URL. "An attacker could leverage this abandoned URL to redirect authorization codes to themselves, exchanging the ill-gotten authorization codes for access tokens," Secureworks Counter Threat Unit (

Banking Sector Targeted in Open-Source Software Supply Chain Attacks

By: THN
Cybersecurity researchers say they have discovered what they describe as the first open-source software supply chain attacks specifically targeting the banking sector. "These attacks showcased advanced techniques, including targeting specific components in web assets of the victim bank by attaching malicious functionalities to it," Checkmarx said in a report published last week. "The attackers

Azure AD Token Forging Technique in Microsoft Attack Extends Beyond Outlook, Wiz Reports

By: THN
The recent attack against Microsoft's email infrastructure by a Chinese nation-state actor referred to as Storm-0558 is said to have a broader scope than previously thought. According to cloud security company Wiz, the inactive Microsoft account (MSA) consumer signing key used to forge Azure Active Directory (Azure AD or AAD) tokens to gain illicit access to Outlook Web Access (OWA) and

Microsoft Bug Allowed Hackers to Breach Over Two Dozen Organizations via Forged Azure AD Tokens

By: THN
Microsoft on Friday said a validation error in its source code allowed for Azure Active Directory (Azure AD) tokens to be forged by a malicious actor known as Storm-0558 using a Microsoft account (MSA) consumer signing key to breach two dozen organizations. "Storm-0558 acquired an inactive MSA consumer signing key and used it to forge authentication tokens for Azure AD enterprise and MSA

TeamTNT's Cloud Credential Stealing Campaign Now Targets Azure and Google Cloud

By: THN
A malicious actor has been linked to a cloud credential stealing campaign in June 2023 that's focused on Azure and Google Cloud Platform (GCP) services, marking the adversary's expansion in targeting beyond Amazon Web Services (AWS). The findings come from SentinelOne and Permiso, which said the "campaigns share similarity with tools attributed to the notorious TeamTNT cryptojacking crew,"

Critical 'nOAuth' Flaw in Microsoft Azure AD Enabled Complete Account Takeover

A security shortcoming in Microsoft Azure Active Directory (AD) Open Authorization (OAuth) process could have been exploited to achieve full account takeover, researchers said. California-based identity and access management service Descope, which discovered and reported the issue in April 2023, dubbed it nOAuth. "nOAuth is an authentication implementation flaw that can affect Microsoft Azure AD

Microsoft Blames Massive DDoS Attack for Azure, Outlook, and OneDrive Disruptions

Microsoft on Friday attributed a string of service outages aimed at Azure, Outlook, and OneDrive earlier this month to an uncategorized cluster it tracks under the name Storm-1359. "These attacks likely rely on access to multiple virtual private servers (VPS) in conjunction with rented cloud infrastructure, open proxies, and DDoS tools," the tech giant said in a post on Friday…

Severe Vulnerabilities Reported in Microsoft Azure Bastion and Container Registry

Two "dangerous" security vulnerabilities have been disclosed in Microsoft Azure Bastion and Azure Container Registry that could have been exploited to carry out cross-site scripting (XSS) attacks. "The vulnerabilities allowed unauthorized access to the victim's session within the compromised Azure service iframe, which can lead to severe consequences, including unauthorized data access…"

MAAD-AF - MAAD Attack Framework - An Attack Tool For Simple, Fast And Effective Security Testing Of M365 And Azure AD

By: Zion3R

MAAD-AF is an open-source cloud attack tool developed for testing the security of Microsoft 365 & Azure AD environments through adversary emulation. MAAD-AF provides security practitioners with easy-to-use attack modules that exploit configurations across different M365/Azure AD cloud-based tools & services.

MAAD-AF is designed to make cloud security testing simple, fast and effective. With virtually no setup required and easy-to-use interactive attack modules, security teams can test their security controls and their detection and response capabilities easily and swiftly.

Features

  • Pre & Post-compromise techniques
  • Simple interactive use
  • Virtually no setup requirements
  • Attack modules for Azure AD
  • Attack modules for Exchange
  • Attack modules for Teams
  • Attack modules for SharePoint
  • Attack modules for eDiscovery

MAAD-AF Attack Modules

  • Azure AD External Recon (Includes sub-modules)
  • Azure AD Internal Recon (Includes sub-modules)
  • Backdoor Account Setup
  • Trusted Network Modification
  • Disable Mailbox Auditing
  • Disable Anti-Phishing
  • Mailbox Deletion Rule Setup
  • Exfiltration through Mailbox Forwarding
  • Gain User Mailbox Access
  • External Teams Access Setup (Includes sub-modules)
  • eDiscovery exploitation (Includes sub-modules)
  • Bruteforce
  • MFA Manipulation
  • User Account Deletion
  • SharePoint exploitation (Includes sub-modules)

Getting Started

Plug & Play - It's that easy!

  1. Clone or download the MAAD-AF GitHub repo to your Windows host
  2. Open PowerShell as Administrator
  3. Navigate to the local MAAD-AF directory (cd .\MAAD-AF)
  4. Run MAAD_Attack.ps1 (.\MAAD_Attack.ps1); a minimal sketch of the full sequence follows below
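
Taken together, a first run looks roughly like this (a sketch only; the repository URL is assumed from the project name and may differ):

# In an elevated PowerShell session on an internet-accessible Windows host
git clone https://github.com/vectra-ai-research/MAAD-AF.git   # assumed repo URL
cd .\MAAD-AF
.\MAAD_Attack.ps1   # launches the interactive attack-module menu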

Requirements

  1. Internet-accessible Windows host
  2. PowerShell (version 5 or later) terminal as Administrator
  3. Required PowerShell modules (MAAD-AF installs these automatically on first run)

Tip: An account with 'Global Admin' privileges is recommended to leverage the full capabilities of MAAD-AF's modules

Limitations

  • MAAD-AF is currently only fully supported on Windows OS

Contribute

  • Thank you for considering contributing to MAAD-AF!
  • Your contributions will help make MAAD-AF better.
  • Join the mission to make security testing simple, fast and effective.
  • There are ongoing efforts to make the source code more modular, to enable easier contributions.
  • Continue monitoring this space for updates on how you can easily incorporate new attack modules into MAAD-AF.

Add Custom Modules

  • Everyone is encouraged to come up with new attack modules that can be added to the MAAD-AF Library.
  • Attack modules are functions that leverage access & privileges established by MAAD-AF to exploit configuration flaws in Microsoft services.

Report Bugs

  • Submit bugs or other issues related to the tool directly in the "Issues" section

Request Features

  • Share those great ideas. Submit new features to add to MAAD-AF's functionality.

Contact

  • If you found this tool useful, want to share an interesting use case, or want to bring issues to attention, whatever the reason, I would love to hear from you. You can make contact at maad-af@vectra.ai or post in the repository Discussions.


Azure-AccessPermissions - Easy to use PowerShell script to enumerate access permissions in an Azure Active Directory environment

By: Zion3R


Easy to use PowerShell script to enumerate access permissions in an Azure Active Directory environment.

Background details can be found in the accompanying blog posts.


Requirements

To run this script you'll need the following PowerShell modules:

All of these can be installed directly within PowerShell:

PS:> Install-Module Microsoft.Graph
PS:> Install-Module AADInternals
PS:> Install-Module AzureADPreview

Usage

First time use

The script uses a browser-based login UI to connect to Azure. If you run the tool for the first time, you might experience the following error:

[*] Connecting to Microsoft Graph...
WARNING: WebBrowser control emulation not set for PowerShell or PowerShell ISE!
Would you like set the emulation to IE 11? Otherwise the login form may not work! (Y/N): Y
Emulation set. Restart PowerShell/ISE!

To solve this, simply allow PowerShell to emulate the browser and rerun your command.

Example use

Import and run; no arguments needed.

Note: On your first run you will likely have to authenticate twice (once against Microsoft Graph and once against Azure AD Graph). I might wrap this into a single login in the future...

PS:> Import-Module .\Azure-AccessPermissions.ps1



Researchers Discover 3 Vulnerabilities in Microsoft Azure API Management Service

Three new security flaws have been disclosed in the Microsoft Azure API Management service that could be abused by malicious actors to gain access to sensitive information or backend services. This includes two server-side request forgery (SSRF) flaws and one instance of unrestricted file upload functionality in the API Management developer portal, according to Israeli cloud security firm Ermetic…

Newly Discovered "By-Design" Flaw in Microsoft Azure Could Expose Storage Accounts to Hackers

A "by-design flaw" uncovered in Microsoft Azure could be exploited by attackers to gain access to storage accounts, move laterally in the environment, and even execute remote code. "It is possible to abuse and leverage Microsoft Storage Accounts by manipulating Azure Functions to steal access-tokens of higher privilege identities, move laterally, potentially access critical business assets, and…"

Microsoft Fixes New Azure AD Vulnerability Impacting Bing Search and Major Apps

Microsoft has patched a misconfiguration issue impacting the Azure Active Directory (AAD) identity and access management service that exposed several "high-impact" applications to unauthorized access. "One of these apps is a content management system (CMS) that powers Bing.com and allowed us to not only modify search results, but also launch high-impact XSS attacks on Bing users," cloud security…

Researchers Detail Severe "Super FabriXss" Vulnerability in Microsoft Azure SFX

Details have emerged about a now-patched vulnerability in Azure Service Fabric Explorer (SFX) that could lead to unauthenticated remote code execution. Tracked as CVE-2023-23383 (CVSS score: 8.2), the issue has been dubbed "Super FabriXss" by Orca Security, a nod to the FabriXss flaw (CVE-2022-35829, CVSS score: 6.2) that was fixed by Microsoft in October 2022. "The Super FabriXss vulnerability…"

Microsoft Azure job outlook

Introduction The business world is relocating to the cloud and the trend is strong. It has been predicted that by the end of 2020, 83% of all businesses will be in the cloud and by 2021, the percentage of workloads processed in cloud data centers will reach 94%. By 2022, cloud services will be three […]

The post Microsoft Azure job outlook appeared first on Infosec Resources.



Microsoft Azure Fundamentals (AZ-900) Domains Overview

Introduction The Microsoft Azure Fundamentals (AZ-900) certification exam is a great way for someone new to the field of cloud computing to demonstrate knowledge, interest and experience to current or potential employers. In this article, we will offer an overview of Microsoft Azure’s most popular certification — the Microsoft Certified Azure Fundamentals certification. We will […]

The post Microsoft Azure Fundamentals (AZ-900) Domains Overview appeared first on Infosec Resources.



Microsoft Azure Certification: Overview And Career Path

Introduction The global COVID-19 pandemic has forced individuals and organizations to adopt new ways of doing daily tasks, from working to learning. It has also accelerated the journey to the cloud for many organizations; for others, it has made them more reliant on the cloud. With that move comes a demand for professionals with cloud […]

The post Microsoft Azure Certification: Overview And Career Path appeared first on Infosec Resources.



Cloud Security Is Simple, Absolutely Simple.

“Cloud security is simple, absolutely simple. Stop over complicating it.”

This is how I kicked off a presentation I gave at the CyberRisk Alliance, Cloud Security Summit on Apr 17 of this year. And I truly believe that cloud security is simple, but that does not mean easy. You need the right strategy.

As I am often asked about strategies for the cloud, and the complexities that come with it, I decided to share my recent talk with you all. Depending on your preference, you can either watch the video below or read the transcript of my talk that’s posted just below the video. I hope you find it useful and will enjoy it. And, as always, I’d love to hear from you, find me @marknca.

For those of you who prefer to read rather than watch a video, here’s the transcript of my talk:

Cloud security is simple, absolutely simple. Stop over complicating it.

Now, I know you’re probably thinking, “Wait a minute, what is this guy talking about? He is just off his rocker.”

Remember, simple doesn’t mean easy. I think we make things way more complicated than they need to be when it comes to securing the cloud, and this makes our lives a lot harder than they need to be. There’s some massive advantages when it comes to security in the cloud. Primarily, I think we can simplify our security approach because of three major reasons.

The first is integrated identity and access management. All three major cloud providers, AWS, Google and Microsoft offer fantastic identity, and access management systems. These are things that security, and [inaudible 00:00:48] professionals have been clamouring for, for decades.

We finally have this ability, we need to take advantage of it.

The second main area is the shared responsibility model. We’ll cover that more in a minute, but it’s an absolutely wonderful tool to understand your mental model, to realize where you need to focus your security efforts, and the third area that simplifies security for us is the universal application of APIs or application programming interfaces.

These give us as security professionals the ability to orchestrate. and automate a huge amount of the grunt work away. These three things add up to, uh, the ability for us to execute a very sophisticated, uh, or very difficult to pull off, uh, security practice, but one that ultimately is actually pretty simple in its approach.

It’s just all the details are hard and we’re going to use these three advantages to make those details simpler. So, let’s take a step back for a second and look at what our goal is.

What is the goal of cybersecurity? That's not a question you hear very often.

A lot of the time you'll hear that the definition of cybersecurity is about securing the confidentiality, integrity, and availability of information or data: the CIA triad (a different CIA). But I like to phrase this in a different way. I think it makes the goal much clearer, and much simpler.

It is to make sure that whatever you're building works as intended and only as intended. Now, you'll realize you can't accomplish this goal just as a security team. You need to work with your developers, you need to work with operations, you need to work with the business units and with the end users of your application as well.

This is a wonderful way of phrasing our goal, and of realizing that we're all in this together to make sure whatever you're building works as intended, and only as intended.

Now, if we move forward, let's look at who we're up against. Who's preventing our stuff from working well?

Normally, you think of who's attacking our systems. Who are the risks? Is it nation states? Is it maybe insider threats? While these are valid threats, they're really overblown. You don't have to worry about nation-state attacks.

If you're a nation state, worry about it. If you're not a nation state, you don't have to worry about it, because frankly there's nothing you can do to stop them. You can slow them down a little bit, but by definition they're going to get through your defenses.

As far as insider attacks go, this is an HR problem. Treat your people well, check in with them, and have a strong information management policy in place, and you're going to reduce this threat naturally. If you go hunting for people, you're going to create the very threats that you're looking for.

So, that brings us to the next set: what about cyber criminals? We do have to worry about cyber criminals.

Cyber criminals are targeting systems simply because these systems are online. These are profit-motivated criminals who are organized and have a good set of tools, so we absolutely need to worry about them. But there's a more insidious, or at least more commonplace, simpler threat that we need to worry about, and that's mistakes.

The vast majority of issues that happen around data breaches, around security vulnerabilities in the cloud, are mistake driven. In fact, to the point where I would almost not worry about cyber criminals, simply because all the work we're going to do to prevent mistakes, and to catch and rectify those mistakes really, really quickly, covers all the stuff we would have done to block out cyber criminals as well.

Mistakes are very common because people are using a lot more services in the cloud. You have a lot more moving parts and complexity in your deployment, and you're going to make a mistake, which is why you need to put automated systems in place to make sure that those mistakes don't happen, or that if they do happen they're caught very, very quickly.

This applies to standard DevOps philosophies for building, and it applies to security very, very wonderfully, so this is the main thing we're going to focus on.

So, if we sum that up together: we have our goal of making sure whatever we're building works as intended, and only as intended, and the biggest risk to that goal is simple mistakes and misconfigurations.

Okay, so we're not starting from ground zero here. We can learn from others, and the first place we're going to learn is the shared responsibility model. Shared responsibility applies to all cloud service providers.

If you look on the left-hand side of the slide here, you'll see the traditional on-premises model. We have roughly six areas where something has to be done roughly daily, whether it's patching, maintenance, operational visibility, monitoring, that kind of thing, and in a traditional on-premises environment you're responsible for all of it, whether it's your team or a team somewhere underneath your organization.

Somewhere within your tree, people are on the hook for doing stuff daily. When we move into infrastructure as a service, so getting a virtual machine from a cloud provider, right off the bat half of the responsibilities are pushed away.

That's a huge, huge win.

And as we move further and further to the right, to more managed services, or SaaS-level services, we have fewer and fewer daily responsibilities.

Now, of course, you always still have to verify that the cloud service provider is doing what they say they're doing, which is why certifications and compliance frameworks come into play, but the bottom line is you're doing less work, so you can focus on fewer areas.

Or I should say, not less work, but a less broad set of work.

So you can have that deeper focus. And of course, you always have to worry about service configuration. You are given knobs and dials to turn to lock things down. You should use them: things like encrypting all your data at rest.

Most of the time it's an easy checkbox, but it's up to you to check it, because it's your responsibility.

We also have the idea of an adoption framework, and this exists for Azure, for AWS and for Google. What these do is help you map out your business processes.

This is important to security, because it gives you the understanding of where your data is, what's important to the business, where it lies, and who needs to touch it, access it and process it.

That also gives us the ability to identify the stakeholders, so that we know who's concerned about this data and who has an investment in it, and finally it helps to deliver an action plan.

The output of all of these frameworks is an action plan to help you migrate into the cloud and to continuously evolve. Well, it's also a phenomenal map for your security efforts.

If you want to prioritize security, this is how you do it. You get it through the adoption framework, understanding what's important to the business, and that lets you identify critical systems and areas for your security.

Again, we want to keep things simple, right? And the other thing we want to look at is the CIS Foundations Benchmarks. They have them for AWS, Azure and GCP, and these provide prescriptive guidance.

They're really a strong baseline, and a checklist of tasks that you can take on on your own in order to cover off the real basics: is encryption at rest on, have I made sure that I don't have things needlessly exposed to the internet, that type of thing.

A really fantastic reference point, and a starting point for your security practice.

Again, with this idea of keeping things as simple as possible: when it comes to our security policy, we've used the frameworks and the baseline to set up a strong start, to understand where the business is concerned, and to prioritize.

And the first question we need to ask ourselves as security practitioners is: what happened? If something happens and we ask 'what happened?', do we have the ability to answer that question? That starts us off with logging and auditing. This needs to be in place before something happens. Let me just say that again: before something happens, you need [laughs] to have this information in place.

Now, this is really to ask these key questions: what happened in my account, and who, or what, made that thing happen?

So, this starts in the cloud with some basic services. For AWS it's CloudTrail; for Azure it's Azure Monitor; and for Google Cloud it used to be called Stackdriver and is now the Google Cloud operations suite. These need to be enabled at full volume.

Don't worry, you can use some lifecycle rules on the data store to keep your costs low.

But this gives you that layer, that basic auditing and logging layer, so that you can answer the question of what happened.
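
As a minimal sketch of what answering "what happened?" looks like in practice, here is one way to pull recent activity on the Azure side with the Az PowerShell module (assuming you are already signed in via Connect-AzAccount); CloudTrail and the Google Cloud operations suite play the same role on the other providers:

# Pull the last 24 hours of Azure Activity Log entries:
# who (Caller) did what (OperationName) to which resource (ResourceId).
Get-AzActivityLog -StartTime (Get-Date).AddDays(-1) |
    Select-Object EventTimestamp, Caller, OperationName, ResourceId |
    Format-Table -AutoSize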

So, the next question you want to ask yourself, or have the ability to answer, is: who's there? Who's doing what in my account? And that comes down to identity.

We've already mentioned this is one of the key pillars of keeping security simple, and of getting highly effective security in your cloud.

[00:09:00] So here you're answering the questions of who are you, and what are you allowed to do? This is where we get a very simple principle in security: the principle of least privilege.

You want to give an identity, whether that's a user, a role, or a service, only the privileges it requires, the ones essential to perform the task it is intended to do.

Okay?

So, basically, if I need to write a file into a storage folder or a bucket, I should only have the ability to write that file. I don't need to read it, I don't need to delete it, I just need to write to it, so only give me that ability.

Remember, that comes back to the other pillar of simple, key cloud security: integrated identity.

This is where it really takes off: we start to assign very granular access permissions. And don't worry, we're going to use the APIs to automate all this stuff so that it's not a management headache, but the principle of least privilege is absolutely critical here.

The services you're going to be using, amazingly, all three cloud providers got in line and named the same thing: it's IAM, identity and access management, whether that's AWS, Azure or Google Cloud.

Now, the next question we’re going to a- ask ourselves are the areas where we’re going to be looking at is really where should I be focusing security controls? Where should I be putting stuff in place?

Because up until now we’ve really talked about leveraging what’s available from the cloud service providers, and you absolutely should available, uh, maximize your usage of their, um, native and primitive, uh, structures primitive as far as base concepts, not as, um, refined.

They’re very advanced controls and, but there are times where you’re going to need to put in your own controls, and these are the areas you’re going to focus on, so you’re going to start with networking, right?

So, in your networking, you’re going to maximize the native structures that are available in the cloud that you’re in, so whether that’s a project structure in Google Cloud, whether that’s a service like transit gateway in AWS, um, and all of them have this idea of a VPC or virtual private cloud or virtual network that is a very strong boundary for you to use.

Remember, most of the time you’re not charged for the creation of those. You have limits in your accounts, but accounts are free, and you can keep adding more, uh, virtual networks. You may be saying, wait a minute, I’m trying to simplify things.

Actually, having multiple virtual networks or virtual private clouds ends up being far simpler because each of them has a task. You go, this application runs in this virtual private cloud, not a big shared one in this specific VPC, and that gives you this wonderfully strong security boundaries, and a very simple way of looking at one VPC, one action, very much the Unix philosophy in play.
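
As an illustration of how cheap that boundary is, a dedicated virtual network per application in Azure is a one-liner (the names, region and address space here are placeholders):

# One application, one virtual network: a strong, essentially free boundary.
New-AzVirtualNetwork -Name "payments-vnet" -ResourceGroupName "rg-payments" `
    -Location "eastus" -AddressPrefix "10.10.0.0/16"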

Key here, though, is understanding what all of the security controls your cloud service provider gives you (VPCs, routing tables, access control lists, security groups, all the SDN features they've got in place) actually do.

These really help you figure out whether system A is allowed to talk to system B, but they don't tell you what they're saying.

And that's where an additional control called an IPS, or intrusion prevention system, comes into play, and you may want to look at getting a third-party control in to do that, because none of the big three cloud providers offer an IPS at this point.

[00:12:00] But that gives you the ability to not just say, "Hey, you're allowed to talk to each other," but to monitor that conversation, to ensure that there's no malicious code being passed back and forth between systems, and that nobody's trying a denial-of-service attack.

A whole bunch of extra things happen in there, so that's where an IPS comes into play in your network defense. Now, let's look at compute.

We can have compute in various forms, whether that's serverless functions, containers, managed containers, or traditional virtual machines, but all the principles are the same.

You want to understand where the shared responsibility line is: how much is on your plate, and how much is on the CSP's?

You want to understand that you need to harden the OS, or the service, or both in some cases. Make sure that's locked down, and have administrator passwords that are very, very complicated.

Don't log into these systems, because you want to be fixing things upstream. You want to be fixing things in the build pipeline, not logging into these systems directly. That's a huge thing for systems people to get over, but it's absolutely essential for security, and you know what?

It's going to take a while, but there are some tricks there you can follow with me. You can see on the slides @marknca, that is my social everywhere; happy to walk you through the next steps.

This presentation is really just the simple basics to start with, to give you that overview of where to focus your time, and to dispel that myth that cloud security means complicating things.

There is a huge path to simplicity here, and that is a massive win for security.

So, the last area you want to focus on here is data and storage. Whether this is databases, big blob storage, or buckets in AWS, it doesn't really matter; the principles, again, are all the same.

You want to encrypt your data at rest using the native cloud service provider functionality, because most of the time it's just pointing at a key and ticking a checkbox.

It's never been easier to encrypt things, and there is no excuse not to; none of the providers charge extra for encryption, which is amazing, and you absolutely want to be taking advantage of that. And you want to be as granular as possible, within reason, with your IAM, okay?

So, there's a line here. In a lot of the data stores that are native to the cloud service providers, you can go right down to the data cell level and say Mark has access, or Mark doesn't have access, to this cell.

That can be highly effective, and maybe right for your use case. It might be too much as well.

But the nice thing is that you have that option. It's integrated and pretty straightforward to implement. And then, finally, you want to be looking at lifecycle strategies to keep your costs under control.

Data really spins out of control when you don't have to worry about capacity. All of the cloud service providers have some fantastic automations in place.

Basically, they give you very simple rules to say, "Okay, after 90 days, move this over to cheaper storage. After 180 days, get rid of it completely, or put it in cold storage."

Take advantage of those or your bill is going to spiral out of control. And that relates to availability and reliability, because the more you're spending on that kind of stuff, the less you have to spend on other areas like security and operational efficiency.

So, that brings us to our next big security question. Is this working?

[00:15:00] How do you know if any of this stuff is working? Well, you want to talk about the concept of traceability. Traceability has a somewhat formal definition, but for me it really comes down to: where did this come from, who can access it, and when did they access it?

That ties very closely to the concept of observability: basically, the ability to look at closed systems and to infer what's going on inside based on what's coming into the system and what's leaving it.

There are some great tools here from the service providers. Again, you want to look at Amazon CloudWatch, Azure Monitor and the Google Cloud operations suite. And this leads us to the key, okay?

This is the key to simplifying everything. I know we've covered a ton in this presentation, but I really want you to take a good look at this slide, and again, hit me up @marknca, happy to answer any questions afterwards as well. This will really, really make things simple, and it will take your security practice to the next level.

The idea is that something happens in your cloud system, in your deployment: there's a trigger, and it either generates an event or a log.

If you go along the bottom row here, you've got a log, which you can then react to with a function to deliver some sort of result. That's the slow lane on the bottom.

We're talking minutes here. You also have the top lane, where your trigger fires off an event, you react to that with a function, and you get a result in the fast lane.

These things happen in seconds, sub-second time. You start to build out your security practice based on this model.

You start automating more and more in these functions, whether it's Lambda, Cloud Functions, or Azure Functions; it doesn't matter.

The CSPs all offer the same core functionality here. This is the critical success metric: you start reacting in the fast lane automatically, so if a security event is triggered, say malware on one of your virtual machines, you can lock that machine off and have a new one spin up automatically.
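
As a hedged sketch of that fast lane, an Azure Functions (PowerShell) handler reacting to a security alert might look roughly like this; the event payload shape and the choice to deallocate the VM so automation rebuilds it are assumptions for illustration:

# run.ps1 -- fires within seconds of an Event Grid security alert.
param($eventGridEvent, $TriggerMetadata)

$vmId  = $eventGridEvent.data.resourceId   # assumed alert payload shape
$parts = $vmId -split '/'
$rg    = $parts[4]                         # resource group segment of the resource ID
$name  = $parts[8]                         # VM name segment of the resource ID

Write-Host "Security event received for $name; isolating."
# Deallocate the suspect VM; a pipeline or scale set spins up a clean replacement.
Stop-AzVM -ResourceGroupName $rg -Name $name -Force -NoWait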

Um, if you’re looking for compliance stuff, the slow lane is the place to go, because it takes minutes.

Reactions happen up top, more, um, stately or more sedate things, so somebody logging into a system is both up top and down low, so up top, if you logged into a VPC or into, um, an instance, or a virtual machine, you’d have a trigger fire off and maybe ask me immediately, “Mark, did you log into the system? Uh, ‘cause you’re, you know, you’re not supposed to be.”

But then I’d respond and say, “Yeah, I, I did log in.” So, immediately you don’t have to respond. It’s not an incident response scenario, but on the bottom track, maybe you’re tracking how many times I’ve logged in.

And after the three or fourth time maybe someone comes by, and has a chat with me, and says, “Hey, do you keep logging into these systems? Can’t you fix it upstream in the deployment, uh, and build a pipeline ‘cause that’s where we need to be moving?”

So, you’ll find this balance, and this concept, I just wanted to get into your heads right now of automating your security practice. If you have a checklist, it should be sitting in a model like this, because it’ll help you, uh, reduce your workload, right?

The idea is to get as much automated possible, and keep things in very clear, and simple boundaries, and what’s more simple than having every security action listed as an automated function, uh, sitting in a code repository somewhere?

[00:18:00] Fantastic approach to modern security practice in the cloud. Very simple, very clear. Yes, difficult to implement. It can be, but it’s an awesome, simple mental model to keep in your head that everything gets automated as a function based on a trigger somewhere.

So, what are the keys to success? What are the keys to keeping this cloud security thing simple? Hopefully you've realized the difference between a simple mental model and the challenges of implementation.

It can be difficult. It's not easy to implement, but the mental model needs to be kept simple, right? Keep things in their own VPCs and their own accounts; automate everything. A very, very simple approach. Everything fits into this structure, so the keys here are remembering the goal.

The goal of cybersecurity is making sure that whatever you build works as intended and only as intended. It's understanding the shared responsibility model, and it's having a plan, through the cloud adoption frameworks, for how to build well, which brings in a concept called the Well-Architected Framework.

It's specific to AWS, but its principles are generic and can be applied everywhere. We didn't cover it here, but I'll put the links in the materials for you. Also remember systems over people, right?

Add the right controls at the right time, and then, finally, observe and react. Be vigilant, practice. You're not going to get this perfect right out of the gate.

You're going to have to refine and iterate, and that's extremely cloud friendly; the cloud model is get it out there and iterate quickly. By putting these structures in place, you make sure that you're not doing that in an insecure manner.

Thank you very much, uh, here’s a couple of links that’ll help you out before we take some Q&A here, um, trendmicro.com/cloud will get you to the products to learn more. We’re also doing this really cool streaming.

Uh, I host a show called Let’s Talk Cloud. Um, we uh, interview experts, uh, and have a great conversation around, um, what they’re talking about, uh, in the cloud, what they’re working on, and not just around security, but just in building in general.

You can hit that up at trendtalks.fyi. Um, and again, hit me up on social @marknca.

So, we have a couple of questions to kick this off, and you can put more questions in the webinar here, and they will send them along, or answer them in kind if they can.

Um, and that’s really what these are about, is the interaction is getting that, um, to and from. So, the first question that I wanted to tackle is an interesting one, and it’s really that systems over people.

Um, you heard me mention it in the, uh, in the end and the question is really what does that mean systems over people? Isn’t security really about people’s expertise?

And, yes and no, so if you are a SOC analyst, if you are working in a security, uh, role right now, I am really confident saying that 80%, 90% of what you do right now could be delegated out to a system.

So, if you were looking at log lines, and stuff that should be done by systems and bubble up, just the goal for you to investigate to do what people are good at in systems are bad at, so systems mean, uh, you know, putting in, uh, to build pipeline, putting in container scanning in the build pipeline, so that you have to manually scan stuff, right to get rid of the basics. Is that a pen test? 100% no.

But it gets rid of that "hey, you didn't upgrade to this version of this library."

[00:21:00] That's all automated, and the more systems you get in place, the more you as a security professional, or your security team, will be able to focus where you can really deliver value and, frankly, where the work is more interesting. So that's what "systems over people" means: basically, automate as much as you can, so that people do what people are really good at, and so that the systems catch the mistakes we all make all the time.

If you accidentally try to push an old build out, systems should stop that; likewise if you push a build that hasn't been checked by container scanning, or one that doesn't have the appropriate security policy in place.

Systems should catch all of that; humans shouldn't have to worry about it at all. That's systems over people. You saw that on the keys to success slide here, I'll just pull it up. That's absolutely key.

Another question that we had was about something we didn't get into here, which is the Well-Architected Framework. This is a document that was published by AWS a number of years back, and they've kept it going.

They've evolved it, and essentially it has five pillars: performance efficiency, reliability, security, cost optimization, and operational excellence. Hey, I've got all five.

And really [laughs], what it is, is a guide to taking advantage of these cloud tools.

Now, AWS publishes it, but honestly it applies to Azure and it applies to Google Cloud as well. It's not service specific. It teaches you how to build in the cloud, and obviously security is one of those big pillars, but it also teaches you how to make those trade-offs, how to build an innovation flywheel, so that you have an idea, test it, get the feedback from it, and move forward.

And that's really, really key. You should be reading it even if Azure or GCP is where you're putting most of your stuff, because it's really about the principles. Everything we do to encourage people to build well means there are fewer security issues, right?

Especially since we know that the number one problem is mistakes.

That leads to the last question we have here: how can I say that you don't need to worry about cyber criminals, that you need to worry about mistakes instead? That's a good question. It's valid, and Trend Micro does a huge amount of research around cyber criminals; I do a huge amount of research around cyber criminals.

By training and by professional experience, I'm a forensic investigator. Taking down cybercrime is what I do. But I think mistakes are the number one thing that we deal with in the cloud, simply because of the underlying complexity.

I know it's ironic to talk about complexity in a talk about simplicity, but look at all the major breaches, especially around S3 buckets: those are all based on mistakes.

Billions and billions and billions of records have been exposed, and millions of dollars of damage done, because of simple mistakes, and that is far more common than cyber criminals.

And yes, cyber criminals, you have [inaudible 00:23:32] worry. You have to worry about them, but everything you're going to do to fix mistakes, and to put systems in place to stop those mistakes from happening, is also going to be protection against cyber criminals. And honestly, if you're the person who runs around your organization screaming about cyber criminals all the time, you're far less credible than if you're saying, "Hey, I want to make sure that we build really, really well, and don't make mistakes."

Thank you for taking the time. My name’s Mark Nunnikhoven. I’m the vice president of cloud research at Trend Micro. I’m also an AWS community hero, and I love this stuff. Hit me up on social @marknca. Happy to chat more.

The post Cloud Security Is Simple, Absolutely Simple. appeared first on .

The Fear of Vendor Lock-in Leads to Cloud Failures

 

Vendor lock-in has been an often-quoted risk since the mid-1990s.

The fear is that by investing too much in one vendor, an organization reduces its options in the future.

Was this a valid concern? Is it still today?

 

The Risk

Organizations walk a fine line with their technology vendors. Ideally, you select a set of technologies that not only meet your current need but that align with your future vision as well.

This way, as the vendor’s tools mature, they continue to support your business.

The risk is that if you have all of your eggs in one basket, you lose all of the leverage in the relationship with your vendor.

If the vendor changes directions, significantly increases their prices, retires a critical offering, the quality of their product drops, or if any number of other scenarios happen, you are stuck.

Locking in to one vendor means that switching to another or changing technologies is prohibitively expensive.

All of these scenarios have happened and will happen again. So it’s natural that organizations are concerned about lock-in.

Cloud Maturity

When the cloud started to rise to prominence, the spectre of vendor lock-in reared its ugly head again. CIOs around the world thought that moving the majority of their infrastructure to AWS, Azure, or Google Cloud would lock them into that vendor for the foreseeable future.

Trying to mitigate this risk, organizations regularly adopt a "cloud neutral" approach. This means they only use "generic" cloud services that can be found across all the providers. Often hidden under the guise of a "multi-cloud" strategy, it's really a hedge to avoid losing position in the vendor/client relationship.

In isolation, that’s a smart move.

Taking a step back and looking at the bigger picture starts to show some of the issues with this approach.

Automation

The first issue is that the heavy use of automation in cloud deployments means vendor "lock-in" is not nearly as significant a risk as it was in past decades. The manual effort required to change vendors for your storage network used to be monumental.

Now? It’s a couple of API calls and a consumption-based bill adjusted by the megabyte. This pattern is echoed across other resource types.

Automation greatly reduces the cost of switching providers, which reduces the risk of vendor lock-in.
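
To make that concrete, here is an illustrative sketch (the account and container names are placeholders) of a blob migration that once would have been a project and is now a short script; the same pattern holds for the other providers' SDKs:

# Server-side copy of every blob in a container to a different storage account.
$src = New-AzStorageContext -StorageAccountName "oldaccount" -UseConnectedAccount
$dst = New-AzStorageContext -StorageAccountName "newaccount" -UseConnectedAccount

Get-AzStorageBlob -Container "data" -Context $src |
    Start-AzStorageBlobCopy -DestContainer "data" -DestContext $dst -Context $src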

Missing Out

When your organization sets a mandate to only use the basic services (server-based compute, databases, network, etc.) from a cloud service provider, you're missing out on one of the biggest advantages of moving to the cloud: doing less.

The goal of a cloud migration is to remove all of the undifferentiated heavy lifting from your teams.

You want your teams directly delivering business value as much of the time as possible. One of the most direct routes to this goal is to leverage more and more managed services.

Using AWS as an example, you don't want to run your own database servers in Amazon EC2, or even standard RDS, if you can help it. Amazon Aurora and DynamoDB generally offer lower operational impact, higher performance, and lower costs.

When organizations are worried about vendor lock-in, they typically miss out on the true value of cloud: a laser focus on delivering business value.

 

But Multi-cloud…

In this new light, a multi-cloud strategy takes on a different aim. Your teams should be trying to maximize business value (which includes cost, operational burden, development effort, and other aspects) wherever that leads them.

As organizations mature in their cloud usage and use of DevOps philosophies, they generally start to cherry pick managed services from cloud providers that best fit the business problem at hand.

They use automation to reduce the impact if they have to change providers at some point in the future.

This leads to a multi-cloud split that typically falls around 80% in one cloud and 10% in each of the other two. That can vary depending on the situation, but the premise is the same: organizations that thrive have a primary cloud and use other services when and where it makes sense.

 

Cloud Spanning Tools

There are some tools that are more effective when they work in all clouds the organization is using. These tools range from software products (like deployment and security tools) to metrics to operational playbooks.

Following the principles of focusing on delivering business value, you want to actively avoid duplicating a toolset unless it’s absolutely necessary.

The maturity of the tooling in cloud operations has reached the point where it can deliver support to multiple clouds without reducing its effectiveness.

This means automation playbooks can easily support multi-cloud (e.g., Terraform). Security tools can easily support multi-cloud (e.g., Trend Micro Cloud One™). Observability tools can easily support multi-cloud (e.g., Honeycomb.io).

The guiding principle for a multi-cloud strategy is to maximize the amount of business value the team is able to deliver. You accomplish this by becoming more efficient (using the right service and tool at the right time) and by removing work that doesn’t matter to that goal.

In the age of cloud, vendor lock-in should be far down on your list of concerns. Don't let a long-standing fear slow down your teams.

The post The Fear of Vendor Lock-in Leads to Cloud Failures appeared first on .

Celebrating Decades of Success with Microsoft at the Security 20/20 Awards

Effective collaboration is key to the success of any organization. But perhaps none more so than those working towards the common goal of securing our connected world. That’s why Trend Micro has always been keen to reach out to industry partners in the security ecosystem, to help us collectively build a safer world and improve the level of protection we can offer our customers. As part of these efforts, we’ve worked closely with Microsoft for decades.

Trend Micro is therefore doubly honored to be at the Microsoft Security 20/20 awards event in February, with nominations for two of the night’s most prestigious prizes.

Better together

No organization exists in a vacuum. The hi-tech, connectivity-rich nature of modern business is the source of its greatest power, but also one of its biggest weaknesses. Trend Micro’s mission from day one has been to make this environment as safe as possible for our customers. But we learned early on that to deliver on this vision, we had to collaborate. That’s why we work closely with the world’s top platform and technology providers — to offer protection that is seamless and optimized for these environments.

As a Gold Application Development Partner, we've worked for years with Microsoft to ensure our security is tightly integrated into its products, offering protection for Azure, Windows and Office 365 customers at the endpoint, on servers, for email and in the cloud. It's all about simplified, optimized security designed to support business agility and growth.

Innovating our way to success

This is a vision that comes from the very top. For over three decades, our CEO and co-founder Eva Chen has been at the forefront of industry leading technology innovation and collaborative success at Trend Micro. Among other things during that time, we’ve released:

  • The world’s first hardware-based system lockdown technology (StationLock)
  • Innovative internet gateway virus protection (InterScan VirusWall)
  • The industry’s first two-hour virus response service-level agreement
  • The first integrated physical-virtual security offering, with agentless threat protection for virtualized desktops (VDI) and data centers (Deep Security)
  • The first ever mobile app reputation service (MARS)
  • AI-based writing-style analysis for protection from Business Email Compromise (Writing Style DNA)
  • Cross-layer detection and response for endpoint, email, servers, & network combined (XDR)
  • Broadest cloud security platform as a service (Cloud One)

Two awards

We’re delighted to have been singled out for two prestigious awards at the Microsoft Security 20/20 event, which will kick off RSA Conference this year:

Customer Impact

At Trend Micro, the customer is at the heart of everything we do. It’s the reason we have hundreds of researchers across 15 threat centers around the globe leading the fight against emerging black hat tools and techniques. It’s why we partner with leading technology providers like Microsoft. And it’s why the channel is so important for us.

Industry Changemaker: Eva Chen

It goes without saying that our CEO and co-founder is an inspirational figure within Trend Micro. Her vision, and her strong belief that our only real competition as cybersecurity vendors is the bad guys and that the industry needs to stand united against them to make the digital world a safer place, guide our more than 6,000 employees every day. But she's also had a major impact on the industry at large, working tirelessly over the years to promote initiatives that have ultimately made our connected world more secure. It's not an exaggeration to say that without Eva's foresight and dedication, the cybersecurity industry would be a much poorer place.

We’re all looking forward to the event, and for the start of 2020. As we enter a new decade, Trend Micro’s innovation and passion to make the digital world a safer place has never been more important.

 

The post Celebrating Decades of Success with Microsoft at the Security 20/20 Awards appeared first on .
