Merriam-Webster's word of 2025 was "slop." Specifically, AI slop.
Low-effort, AI-generated content now fills social feeds, inboxes, and message threads. Much of it is harmless. Some of it is entertaining. But its growing presence is changing what people expect to see online.
McAfee's 2026 State of the Scamiverse report shows that scammers are increasingly using the same AI tools and techniques to make fraud feel familiar and convincing. Phishing sites look more legitimate. Messages sound more natural. Conversations unfold in ways that feel routine instead of suspicious.
According to McAfee's consumer survey, Americans now spend an average of 114 hours a year trying to determine whether the messages they receive are real or scams. That's nearly three full workweeks lost not to fraud itself, but to hesitation and doubt.
As AI-generated content becomes more common, the traditional signals people relied on to spot scams, such as strange links and awkward grammar, are fading. That shift does not mean everything online is dangerous. It means it takes more effort to tell what is real from what is malicious.
The result is growing uncertainty. And a rising cost in time, attention, and confidence.
The average American receives 14 scam messages a day
Scams are no longer occasional interruptions. They are constant background noise.
According to the report, Americans receive an average of 14 scam messages per day across text, email, and social media.
Many of these messages do not look suspicious at first glance. They resemble routine interactions people are conditioned to respond to.
- Delivery notices
- Account verification requests
- Subscription renewals
- Job outreach
- Bank alerts
- Charity appeals
And with AI tools, scammers are churning out these messages at scale and making them look remarkably realistic.
That strategy is working. One in three Americans says they feel less confident spotting scams than they did a year ago.
Figure 1. Types of scams reported in our consumer survey.
Most scams move fast, and many are over in minutes
The popular image of scams often involves long email threads or elaborate schemes. In reality, many modern scams unfold quickly.
Among Americans who were harmed by a scam, the typical scam played out in about 38 minutes.
That speed matters. It leaves little time for reflection, verification, or second opinions. Once a person engages, scammers often escalate immediately.
Still, some scammers play the long game with realistic romance or friendship scams that turn into crypto pitches or urgent requests for financial support. Often these scams start not with a link, but with a familiar DM.
In fact, the report found that more than one in four suspicious social messages contain no link at all, removing one of the most familiar warning signs of a scam. And 44% of people say they have replied to a suspicious direct message without a link.

The cost is not just money. It is time and attention.
Financial losses from scams remain significant. One in three Americans reports losing money to a scam. Among those who lost money, the average loss was $1,160.
But the report argues that focusing only on dollar amounts understates the broader impact: scams also cost time, attention, and emotional energy.
People are forced to second-guess everyday digital interactions. Opening a message. Answering a call. Scanning a QR code. Responding to a notification. That time adds up.
And who doesn't know that sinking feeling when you realize a message you opened or a link you clicked wasn't legitimate?

Figure 3. World map of average scam losses.
Why AI slop makes scams harder to spot
The rise of AI-generated content has changed the baseline of what people expect online. It's now an everyday part of life.
According to the report, Americans say they see an average of three deepfakes per day.
Most are not scams. But that familiarity has consequences.
When AI-generated content becomes normal, it becomes harder to recognize when the same tools are being used maliciously. The report found that more than one in three Americans do not feel confident identifying deepfake scams, and one in ten say they have already experienced a voice-clone scam. Voice-clone scams often feature AI deepfake audio of public figures, or even people you know, requesting urgent financial support or sensitive personal information.
These AI-generated scams also take the form of phony customer support outreach, fake job opportunities and interviews, and illegitimate investment pitches.
Account takeovers are becoming routine
Scams do not always end with an immediate financial loss. Many are designed to gain long-term access to accounts.
The report found that 55% of Americans say a social media account was compromised in the past year.
Once an account is taken over, scammers can impersonate trusted contacts, spread malicious links, or harvest additional personal information. The damage often extends well beyond the original interaction.
Scams are blending into everyday digital life
What stands out most in the 2026 report is how thoroughly scams have blended into normal online routines.
Scammers are embedding fraud into the same systems people rely on to work, communicate, and manage their lives.
- Cloud storage alerts (such as Google Drive or iCloud notices) warning that storage is full or access will be restricted unless action is taken, pushing users toward fake login pages.
- Shared document notifications that appear to come from coworkers or collaborators, prompting recipients to open files or sign in to view a document that does not exist.
- Payment confirmations that claim a charge has gone through, pressuring people to click or reply quickly to dispute a transaction they do not recognize.
- Verification codes sent unexpectedly, often as part of account takeover attempts designed to trick people into sharing one-time passwords.
- Customer support messages that impersonate trusted brands, offering help with an issue the recipient never reported.

Figure 4. Example of a cloud scam message.
The Key Takeaway
Not all AI-generated content is a scam. Much of what people encounter online every day is harmless, forgettable, or even entertaining. But the rapid growth of AI slop is creating a different kind of risk.
Constant exposure to synthetic images, videos, and messages is wearing down people's ability to tell what is real and what is manipulated. The State of the Scamiverse report shows that consumers are already struggling with that distinction, and the data suggests the consequences are compounding. As digital noise increases, so does fatigue. And fatigue is exactly what scammers exploit.
FTC data shows losses from scams continuing to climb, and McAfee Labs is tracking a rise in fraud that blends seamlessly into everyday digital routines. Cloud storage warnings, shared document notifications, payment confirmations, verification codes, and customer support messages are increasingly being mimicked or abused by scammers because they look normal and demand quick action.
The danger of the AI slop era is not that everything online is fake. The danger is that people are being forced to question everything. That constant doubt slows judgment, erodes confidence, and creates openings for fraud to scale.
In 2026, the cost of scams is no longer measured only in dollars lost. It is measured in time, attention, and trust, and those losses are still growing.
Learn more and read the full report here.
FAQ: Understanding the AI Slop Era and Modern Scams
Q: What is AI slop?
A: The term refers to the flood of low-quality, AI-generated content now common online. While much of it is harmless, constant exposure can make it harder to identify when similar technology is used for scams.

Q: How much time do Americans lose to scams?
A: Americans spend 114 hours a year determining whether digital messages and alerts are real or fraudulent. That is nearly three workweeks.

Q: How fast do scams happen today?
A: Among people harmed by scams, the typical scam unfolds in about 38 minutes from first interaction to harm.

Q: How common are deepfake scams?
A: Americans report seeing three deepfakes per day on average, and one in ten say they have experienced a voice-clone scam.
The post McAfee Report: In the AI Slop Era, Americans Spend Weeks Each Year Questioning What's Real appeared first on McAfee Blog.