
Webinar: How to Automate Exposure Validation to Match the Speed of AI Attacks

29 April 2026 at 12:02
In February 2026, researchers uncovered a shift that changed the game: threat actors are now embedding custom AI setups directly into the kill chain to automate attacks. We aren't just talking about AI writing better phishing emails anymore. We’re talking about autonomous agents mapping Active Directory and seizing Domain Admin credentials in minutes. The problem? Most defensive workflows

Why Secure Data Movement Is the Zero Trust Bottleneck Nobody Talks About

28 April 2026 at 11:58
Every security program is betting on the same assumption: once a system is connected, the problem is solved. Open a ticket, stand up a gateway, push the data through. Done. That assumption is wrong. It is also a major reason Zero Trust programs stall. New research my team just published puts numbers on it. The Cyber360: Defending the Digital Battlespace report, based on a survey of 500 security

After Mythos: New Playbooks for a Zero-Window Era

28 April 2026 at 10:30
When patching isn’t fast enough, NDR helps contain the next era of threats. If you’ve been tracking advancements in AI, you know the exploit window, the short buffer that organizations relied on to patch and protect after a vulnerability disclosure, is closing fast. Anthropic’s new model, Claude Mythos, and its Project Glasswing showed that finding exploitable vulnerabilities and subtle cracks

Mythos Changed the Math on Vulnerability Discovery. Most Teams Aren't Ready for the Remediation Side

27 April 2026 at 11:58
Anthropic’s Claude Mythos Preview has dominated security discussions since its April 7 announcement. Early reporting describes a powerful cybersecurity-focused AI system capable of identifying vulnerabilities at scale and raising serious questions about how quickly organizations can validate, prioritize, and remediate what it finds. The debate that followed has mostly focused on the right

Bridging the AI Agent Authority Gap: Continuous Observability as the Decision Engine

24 April 2026 at 11:49
The AI Agent Authority Gap: From Ungoverned to Delegation. As discussed in our previous article, AI agents are exposing a structural gap in enterprise security, but the problem is often framed too narrowly. The issue is not simply that agents are new actors. It is that agents are delegated actors. They do not emerge with independent authority. They are triggered, invoked, provisioned, or

[Webinar] Mythos Reality Check: Beating Automated Exploitation at AI Speed

23 April 2026 at 12:03
Imagine a world where hackers don't sleep, don't take breaks, and find weak spots in your systems instantly. Well, that world is already here. Thanks to AI, attackers are now launching automated, large-scale exploits faster than ever before. The time you have to fix a vulnerability before it gets attacked is shrinking to zero. We call this the Collapsing Exploit Window, and it means your

Project Glasswing Proved AI Can Find the Bugs. Who's Going to Fix Them?

23 April 2026 at 11:30
Last week, Anthropic announced Project Glasswing, an AI model so effective at discovering software vulnerabilities that they took the extraordinary step of postponing its public release. Instead, the company has given access to Apple, Microsoft, Google, Amazon, and a coalition of others to find and patch bugs before adversaries can. Mythos Preview, the model that led to Project Glasswing, found

Toxic Combinations: When Cross-App Permissions Stack into Risk

22 April 2026 at 10:41
On January 31, 2026, researchers disclosed that Moltbook, a social network built for AI agents, had left its database wide open, exposing 35,000 email addresses and 1.5 million agent API tokens across 770,000 active agents. The more worrying part sat inside the private messages. Some of those conversations held plaintext third-party credentials, including OpenAI API keys shared between agents,

5 Places Where Mature SOCs Keep MTTR Fast and Others Waste Time

21 April 2026 at 13:00
Security teams often present MTTR as an internal KPI. Leadership sees it differently: every hour a threat dwells inside the environment is an hour of potential data exfiltration, service disruption, regulatory exposure, and brand damage. The root cause of slow MTTR is almost never "not enough analysts." It is almost always the same structural problem: threat intelligence that exists
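The MTTR figure this entry discusses is just the mean of detection-to-resolution intervals across closed incidents. A minimal sketch of the computation, using made-up incident timestamps (the data and the `mttr_hours` helper are illustrative, not from the article):

```python
from datetime import datetime, timedelta

# Hypothetical incidents: (detected_at, resolved_at) pairs.
incidents = [
    (datetime(2026, 4, 1, 9, 0), datetime(2026, 4, 1, 13, 0)),   # 4 h to resolve
    (datetime(2026, 4, 2, 10, 0), datetime(2026, 4, 2, 12, 0)),  # 2 h
    (datetime(2026, 4, 3, 8, 0), datetime(2026, 4, 3, 17, 0)),   # 9 h
]

def mttr_hours(incidents):
    # Mean of (resolved - detected) across all closed incidents, in hours.
    total = sum(((resolved - detected) for detected, resolved in incidents),
                timedelta())
    return total / len(incidents) / timedelta(hours=1)

print(mttr_hours(incidents))  # 5.0
```

The per-incident intervals, not just the average, are what surface the structural bottlenecks the article alludes to: a single multi-day outlier can hide behind a healthy-looking mean.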

No Exploit Needed: How Attackers Walk Through the Front Door via Identity-Based Attacks

21 April 2026 at 11:30
The cybersecurity industry has spent the last several years chasing sophisticated threats like zero-days, supply chain compromises, and AI-generated exploits. However, the most reliable entry point for attackers still hasn't changed: stolen credentials. Identity-based attacks remain a dominant initial access vector in breaches today. Attackers obtain valid credentials through credential stuffing

Why Most AI Deployments Stall After the Demo

20 April 2026 at 11:30
The fastest way to fall in love with an AI tool is to watch the demo. Everything moves quickly. Prompts land cleanly. The system produces impressive outputs in seconds. It feels like the beginning of a new era for your team. But most AI initiatives don't fail because of bad technology. They stall because what worked in the demo doesn't survive contact with real operations. The gap between a

[Webinar] Eliminate Ghost Identities Before They Expose Your Enterprise Data

18 April 2026 at 08:07
In 2024, compromised service accounts and forgotten API keys were behind 68% of cloud breaches. Not phishing. Not weak passwords. Unmanaged non-human identities that nobody was watching. For every employee in your org, there are 40 to 50 automated credentials: service accounts, API tokens, AI agent connections, and OAuth grants. When projects end or employees leave, most

Hidden Passenger? How Taboola Routes Logged-In Banking Sessions to Temu

16 April 2026 at 10:30
A bank approved a Taboola pixel. That pixel quietly redirected logged-in users to a Temu tracking endpoint. This occurred without the bank’s knowledge, without user consent, and without a single security control registering a violation. Read the full technical breakdown in the Security Intelligence Brief. The "First-Hop Bias" Blind Spot: Most

Deterministic + Agentic AI: The Architecture Exposure Validation Requires

15 April 2026 at 11:30
Few technologies have moved from experimentation to boardroom mandate as quickly as AI. Across industries, leadership teams have embraced its broader potential, and boards, investors, and executives are already pushing organizations to adopt it across operational and security functions. Pentera’s AI Security and Exposure Report 2026 reflects that momentum: every CISO surveyed

Analysis of 216M Security Findings Shows a 4x Increase In Critical Risk (2026 Report)

14 April 2026 at 10:00
OX Security recently analyzed 216 million security findings across 250 organizations over a 90-day period. The primary takeaway: while raw alert volume grew by 52% year-over-year, prioritized critical risk grew by nearly 400%. The surge in AI-assisted development is creating a "velocity gap" where the density of high-impact vulnerabilities is scaling faster than

Your MTTD Looks Great. Your Post-Alert Gap Doesn't

13 April 2026 at 11:41
Anthropic restricted its Mythos Preview model last week after it autonomously found and exploited zero-day vulnerabilities in every major operating system and browser. Palo Alto Networks' Wendi Whitmore warned that similar capabilities are weeks or months from proliferation. CrowdStrike's 2026 Global Threat Report puts average eCrime breakout time at 29 minutes. Mandiant's M-Trends

Browser Extensions Are the New AI Consumption Channel That No One Is Talking About

10 April 2026 at 11:00
While much of the discussion on AI security centers around protecting ‘shadow’ AI and GenAI consumption, there's a wide-open window nobody's guarding: AI browser extensions. A new report from LayerX exposes just how deep this blind spot goes, and why AI extensions may be the most dangerous AI threat surface in your network that isn't on anyone's

The Hidden Security Risks of Shadow AI in Enterprises

9 April 2026 at 11:31
As AI tools become more accessible, employees are adopting them without formal approval from IT and security teams. While these tools may boost productivity, automate tasks, or fill gaps in existing workflows, they also operate outside the visibility of security teams, bypassing controls and creating new blind spots in what is known as shadow AI. While similar to the phenomenon of

Shrinking the IAM Attack Surface through Identity Visibility and Intelligence Platforms (IVIP)

8 April 2026 at 11:30
The Fragmented State of Modern Enterprise Identity: Enterprise IAM is approaching a breaking point. As organizations scale, identity becomes increasingly fragmented across thousands of applications, decentralized teams, machine identities, and autonomous systems. The result is Identity Dark Matter: identity activity that sits outside the visibility of centralized IAM and

[Webinar] How to Close Identity Gaps in 2026 Before AI Exploits Enterprise Risk

7 April 2026 at 12:17
In the rapid evolution of the 2026 threat landscape, a frustrating paradox has emerged for CISOs and security leaders: Identity programs are maturing, yet the risk is actually increasing. According to new research from the Ponemon Institute, hundreds of applications within the typical enterprise remain disconnected from centralized identity systems. These "dark
