Job scams are on the rise. And asking the right questions can help steer you clear of them.
That rise in job scams is steep, according to the U.S. Federal Trade Commission (FTC). Recent data shows that reported losses grew more than five times over between 2020 and 2024. In 2024 alone, reported losses hit half a billion dollars, with unreported losses undoubtedly pushing the actual figure even higher.
Last week, we covered how “pay to get paid” scams account for a big chunk of online job scams. Here, we’ll cover a couple more that we’ve seen circulating on social media and via texts—and how some pointed questions can help you avoid them.
Some job scammers pose as recruiters from job agencies who reach potential victims the same way legitimate agencies do—by email, text, and networking sites. This can catch people with their guard down, because it’s not at all unusual to get contacted this way, “out of the blue” so to speak.
Yet one of the quickest ways to spot a scammer is when the “recruiter” asks you to pay a fee for the matchmaking, particularly if they ask for it up front. Legitimate headhunters, temp agencies, and staffing agencies typically get paid by the company or business that ultimately does the hiring. Job candidates don’t pay a thing.
Another form of scam occurs during the “onboarding” process of the job. The scammer happily welcomes the victim to the company and then informs them that they’ll need to take some online training and perhaps buy a computer or other office equipment. Of course, the scammer asks the victim to pay for all of it—leaving the victim out hundreds of dollars and the scammer with their payment info.
One way you can spot a job scam is to press for answers. Asking pointed questions about a company and the job it’s offering, just as you would in any real interview, can reveal gaps in a scammer’s story. In effect, scammers are putting on an acting job, and some don’t thoroughly prepare for their role. They don’t think through the details, hoping that victims will be happy enough about a job prospect not to ask too many questions.
If the hiring process moves quicker than expected or details about a job seem light, it’s indeed time to ask questions. Here are a few you can keep handy when you start to wonder if you have a scam on your hands …
This is a great place to start. Legitimate employers write up job listings that they post on their website and job sites. In those descriptions, the work and everything it entails gets spelled out to the letter. A real employer should be able to provide you with a job description or at least cover it clearly over the course of a conversation.
This one can trip up a scammer quickly. A scammer might avoid giving a physical address. Likewise, they might offer up a fake one. You can readily expose either a non-answer or a lie by following up the question with a web search for the company’s physical address. (Resources like the Better Business Bureau can also help you research a company and its track record.)
Asking about co-workers, bosses, reporting structures and the like can also help sniff out a scam. Real employers, once again, will have ready answers here. They might even start dropping names and details about people’s tenure and background. Meanwhile, this is one more place where scammers might tip their hand because they haven’t made up those details.
This question alone can offer a telltale sign. Many job scams move through the hiring process at breakneck speed—skipping past the usual interview loops and callbacks that many legitimate jobs have. Scammers want to turn over their victims quickly, so they’ll make the “hiring process” quick as well. If it feels like you’re blazing through the steps, it could be a scam.
Every business has a story, even if it’s still in its startup days. Anyone in a recruiting or hiring position will have a good handle on this question, as they will on any follow-up questions about the company’s mission or goals. Again, vagueness in response to these kinds of questions could be a sign of a scam.
Scammers often reach out through direct messages on social media sites like Facebook and Instagram. Legitimate recruiters stick to business networking sites like LinkedIn. Companies maintain established accounts on recruiting platforms that people know and trust, so view any contact outside of them as suspicious.
Scammers use the “hiring process” to trick people into providing their personal info via malicious links. Web protection, included in our plans, can steer you clear of them. Likewise, our Scam Detector scans URLs in your text messages and alerts you if they’re sketchy. If you accidentally click a bad link, both web and text scam protection will block the risky site.
Many scammers get your contact info from data broker sites. McAfee’s Personal Data Cleanup scans some of the riskiest data broker sites, shows you which ones are selling your personal info, and, depending on your plan, can help you remove it. Our Social Privacy Manager lowers your public profile further still. It helps you adjust more than 100 privacy settings across your social media accounts in just a few clicks, so your personal info is only visible to the people you want to share it with.
The post Interviewing for a Job? Spot a Scam with These Questions appeared first on McAfee Blog.
How does this job offer sound? When you pay, you get paid. Sounds fishy, right? In fact, it’s one of the fastest-growing job scams out there right now.
Looking at job scams overall, data from the U.S. Federal Trade Commission (FTC) shows that job scam reports nearly tripled between 2020 and 2024. Further, reported losses grew more than five times—spiking to $501 million in 2024.
In all, job scams are more common and more costly than ever.
And leading those losses is a new breed of job scam, where victims indeed “pay to get paid.”
The FTC has dubbed these “pay to get paid” scams “gamified job scams” or “task scams.” Given the way these scams work, the naming fits. The work feels like a game-like task—and the only winner is the scammer.
It all plays out like this:
You get a job offer by text or private message. The scammer offers you “work” involving “app optimization” or “product boosting,” which they often describe in loose, hazy terms.
You accept the offer. Then the scammer sets you up with an account on an app or platform where you get tasked to “like” or “rate” sets of videos or product images online.
You get to work. The app or platform is fake, yet it looks like you’re racking up commissions as you click and complete sets of tasks. At this point the scammer might dole out a small payment or two, making you think the job truly is legit.
The scammer sets the hook. Here’s where the gamey “pay to get paid” part comes in—if you want more “work,” you must pay for it. At this point, the scammer requires a “deposit” for your next set of tasks. Like a video game, the scammer sweetens the deal by saying the next set can “level up” your earnings.
You get scammed. You make the deposit, complete the task set, and try to get your earnings from the app or platform—only to find that the scammer and your money are gone. It was all fake.
Based on what we’ve seen in the past, these scams borrow from other “easy money” con games found on payment apps. “Easy money” scams build slowly, as scammers create a false sense of trust with victims by paying small returns on small investments over time. Finally, with the con set, the scammer asks for a huge amount and disappears with it. “Pay to get paid” scams can work much the same way.
A few things to keep in mind about this scam as well:
Step one—ignore job offers over text and social media
A proper recruiter will reach out to you by email or via a job networking site. Moreover, they’ll give you clear details about a possible job, and they’ll answer any questions you have just as clearly.
Quite the opposite, scammers write vague texts and private messages. They’re often big on hype but short on details. Asking questions about the job will get you similarly vague answers. Ignore these offers.
Step two—look up the company
In the case of online job offers in general, look up the company. Check out their background and see if it’s an actual company—and see if that matches up with what that recruiter is telling you.
In the U.S., you have several resources that can help you answer that question. The Better Business Bureau (BBB) offers a searchable listing of businesses in the U.S., along with a brief profile, a rating, and even a list of complaints lodged against them (and company responses). Spending some time here can quickly shed light on the legitimacy of a company.
For a listing of businesses with U.S. and international locations, organizations like S&P Global Ratings and the Dun and Bradstreet Corporation can provide background info as well.
Lastly, check out the company’s website. See if it has a job listing that matches the one you’re offered. Legwork like this can help uncover a scam.
Step three—refuse to pay
As simple as it sounds, don’t pay to get paid.
In any case where you’re asked to pay up front, with any form of payment, refuse. A legitimate employer will never ask you to invest or deposit a small amount of money with the promise of a big return. And a legitimate employer will provide you with things like training or equipment to do the job you’re qualified for.
Online protection software like ours can help keep you far safer from job scams and scams in general. Specific to job scams, here are just a few ways it can help:
The post “Pay to Get Paid” – The New Job Scam That’s Raking in Millions Right Now appeared first on McAfee Blog.
Al Roker never had a heart attack. He doesn’t have hypertension. But if you watched a recent deepfake video of him that spread across Facebook, you might think otherwise.
In a recent segment on NBC’s TODAY, Roker revealed that a fake AI-generated video was using his image and voice to promote a bogus hypertension cure—claiming, falsely, that he had suffered “a couple of heart attacks.”
“A friend of mine sent me a link and said, ‘Is this real?'” Roker told investigative correspondent Vicky Nguyen. “And I clicked on it, and all of a sudden, I see and hear myself talking about having a couple of heart attacks. I don’t have hypertension!”
The fabricated clip looked and sounded convincing enough to fool friends and family—including some of Roker’s celebrity peers. “It looks like me! I mean, I can tell that it’s not me, but to the casual viewer, Al Roker’s touting this hypertension cure… I’ve had some celebrity friends call because their parents got taken in by it.”
While Meta quickly removed the video from Facebook after being contacted by TODAY, the damage was done. The incident highlights a growing concern in the digital age: how easy it is to create—and believe—convincing deepfakes.
“We used to say, ‘Seeing is believing.’ Well, that’s kind of out the window now,” Roker said.
Al Roker isn’t the first public figure to be targeted by deepfake scams. Taylor Swift was recently featured in an AI-generated video promoting fake bakeware sales. Tom Hanks has spoken out about a fake dental plan ad that used his image without permission. Oprah, Brad Pitt, and others have faced similar exploitation.
These scams don’t just confuse viewers—they can defraud them. Criminals use the trust people place in familiar faces to promote fake products, lure them into shady investments, or steal their personal information.
“It’s frightening,” Roker told his co-anchors Craig Melvin and Dylan Dreyer. Craig added: “What’s scary is that if this is where the technology is now, then five years from now…”
Nguyen demonstrated just how simple it is to create a fake using free online tools, and brought in BrandShield CEO Yoav Keren to underscore the point: “I think this is becoming one of the biggest problems worldwide online,” Keren said. “I don’t think that the average consumer understands…and you’re starting to see more of these videos out there.”
According to McAfee’s State of the Scamiverse report, the average American sees 2.6 deepfake videos per day, with Gen Z seeing up to 3.5 daily. These scams are designed to be believable—because the technology makes it possible to copy someone’s voice, mannerisms, and expressions with frightening accuracy.
And it doesn’t just affect celebrities:
While the technology behind deepfakes is advancing, there are still ways to spot—and stop—them:
And most importantly, be skeptical of celebrity endorsements on social media. If it seems out of character or too good to be true, it probably is.
McAfee’s Deepfake Detector, powered by AMD’s Neural Processing Unit (NPU) in the new Ryzen AI 300 Series processors, identifies manipulated audio and video in real time—giving users a critical edge in spotting fakes.
This technology runs locally on your device for faster, private detection—and peace of mind.
Al Roker’s experience shows just how personal—and persuasive—deepfake scams have become. They blur the line between truth and fiction, targeting your trust in the people you admire.
With McAfee, you can fight back.
The post ‘Seeing is Believing is Out the Window’: What to Learn From the Al Roker AI Deepfake Scam appeared first on McAfee Blog.
Authored by Dexter Shin
Cybercriminals are constantly evolving their techniques to bypass security measures. Recently, the McAfee Mobile Research Team discovered malware campaigns abusing .NET MAUI, a cross-platform development framework, to evade detection. These threats disguise themselves as legitimate apps, targeting users to steal sensitive information. This blog highlights how these malware operate, their evasion techniques, and key recommendations for staying protected.
In recent years, cross-platform mobile development frameworks have grown in popularity. Many developers use tools like Flutter and React Native to build apps that work on both Android and iOS. Among these tools, Microsoft provides a framework based on C#, called Xamarin. Since Xamarin is well-known, cybercriminals sometimes use it to develop malware. We have previously found malware related to this framework. However, Microsoft ended support for Xamarin in May 2024 and introduced .NET MAUI as its replacement.
Unlike Xamarin, .NET MAUI expands platform support beyond mobile to include Windows and macOS. It also runs on .NET 6+, replacing the older .NET Standard, and introduces performance optimizations with a lightweight handler-based architecture instead of custom renderers.
As technology evolves, cybercriminals adapt as well. Reflecting this trend, we recently discovered new Android malware campaigns developed using .NET MAUI. These apps have their core functionalities written entirely in C# and stored as blob binaries. This means that unlike traditional Android apps, their functionalities do not exist in DEX files or native libraries. However, many antivirus solutions focus on analyzing these components to detect malicious behavior. As a result, .NET MAUI can act as a type of packer, allowing malware to evade detection and remain active on devices for a long time.
In the following sections, we will introduce two Android malware campaigns that use .NET MAUI to evade detection. These threats disguise themselves as legitimate services to steal sensitive information from users. We will explore how they operate and why they pose a significant risk to mobile security.
McAfee Mobile Security already detects all of these apps as Android/FakeApp and protects users from these threats. For more information about our Mobile Product, visit McAfee Mobile Security.
While we found multiple versions of these malicious apps, the following two examples are used to demonstrate how they evade detection.
First off, where are users finding these malicious apps? Often, these apps are distributed through unofficial app stores. Users are typically directed to such stores by clicking on phishing links shared by untrusted sources in messaging groups or text messages. This is why we at McAfee recommend that users avoid clicking on untrusted links.
The first fake app we found disguises itself as IndusInd Bank, specifically targeting Indian users. When a user launches the app, it prompts them to input personal and financial details, including their name, phone number, email, date of birth, and banking information. Once the user submits this data, it is immediately sent to the attacker’s C2 (Command and Control) server.
Figure 1. Fake IndusInd Bank app’s screen requesting user information
As mentioned earlier, this is not a traditional Android malware. Unlike typical malicious apps, there are no obvious traces of harmful code in the Java or native code. Instead, the malicious code is hidden within blob files located inside the assemblies directory.
Figure 2. Blob contains malicious code
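For readers who do their own triage, one quick way to spot this kind of packaging is simply to list what an APK carries under its assemblies directory. The following is a minimal sketch, not McAfee’s detection logic; the file name is a placeholder.

```python
# Minimal triage sketch: list the assemblies/ entries inside an APK.
# .NET MAUI / Xamarin apps ship their C# code here (often as .blob or .dll files)
# rather than in classes.dex, which is why DEX-focused scanners can miss the logic.
# "sample.apk" is a placeholder path.
import zipfile

with zipfile.ZipFile("sample.apk") as apk:
    entries = [n for n in apk.namelist() if n.startswith("assemblies/")]
    for name in entries:
        print(name)
    if entries:
        print(f"{len(entries)} entries under assemblies/ -- likely a .NET (MAUI/Xamarin) app")
```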
The following code snippet reveals how the app collects and transmits user data to the C2 server. Based on the code, the app structures the required information as parameters before sending it to the C2 server.
Figure 3. C# code responsible for stealing user data and sending it to the C2 server
In contrast to the first fake app, this second malware is even more difficult for security software to analyze. It specifically targets Chinese-speaking users and attempts to steal contacts, SMS messages, and photos from their devices. In China, where access to the Google Play Store is restricted, such apps are often distributed through third-party websites or alternative app stores. This allows attackers to spread their malware more easily, especially in regions with limited access to official app stores.
Figure 4. Distribution site and fake X app targeting Chinese-speaking users
One of the key techniques this malware uses to remain undetected is multi-stage dynamic loading. Instead of directly embedding its malicious payload in an easily accessible format, it encrypts and loads its DEX files in three separate stages, making analysis significantly more difficult.
In the first stage, the app’s main activity, defined in AndroidManifest.xml, decrypts an XOR-encrypted file and loads it dynamically. This initial file acts as a loader for the next stage. In the second stage, the dynamically loaded file decrypts another AES-encrypted file and loads it. This second stage still does not reveal the core malicious behavior but serves as another layer of obfuscation. Finally, in the third stage, the decrypted file contains code related to the .NET MAUI framework, which is then loaded to execute the main payload.
Figure 5. Multi-stage dynamic loading
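To illustrate the layering described above, here is a heavily simplified sketch of how an analyst might peel the first two stages. The file names, XOR key, and AES key/IV placement are all assumptions for illustration only; the real samples derive these values internally.

```python
# Hypothetical two-stage decryption, mirroring the XOR -> AES layering described above.
# Requires pycryptodome. All keys, IV handling, and file names are assumptions.
from Crypto.Cipher import AES

def xor_decrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Stage 1: XOR-encrypted loader shipped as an app asset (placeholder name and key).
stage1 = xor_decrypt(open("asset_stage1.bin", "rb").read(), b"\x5a")

# Stage 2: AES-encrypted blob; here we assume the IV is prepended to the ciphertext.
blob = open("asset_stage2.bin", "rb").read()
stage2 = AES.new(b"0123456789abcdef", AES.MODE_CBC, iv=blob[:16]).decrypt(blob[16:])

open("stage1_loader.bin", "wb").write(stage1)
open("stage2_payload.bin", "wb").write(stage2)  # holds the .NET MAUI payload
```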
The main payload is ultimately hidden within the C# code. When the user interacts with the app, such as pressing a button, the malware silently steals their data and sends it to the C2 server.
Figure 6. C# code responsible for stealing images, contacts, and SMS data
Beyond multi-stage dynamic loading, this malware also employs additional tricks to make analysis more difficult. One technique is manipulating the AndroidManifest.xml file by adding an excessive number of unnecessary permissions. These permissions include large numbers of meaningless, randomly generated strings, which can cause errors in certain analysis tools. This tactic helps the malware evade detection by disrupting automated scanners and static analysis.
Figure 7. AndroidManifest.xml file with excessive random permissions
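A rough heuristic for this trick is to count the declared permissions and flag names that look randomly generated. The sketch below is an illustration only (the threshold and the "randomness" test are our assumptions, not McAfee's detection logic), and it expects a decoded AndroidManifest.xml, for example one produced by apktool.

```python
# Heuristic sketch: flag manifests with many permissions whose names look random.
import re

manifest = open("AndroidManifest.xml", encoding="utf-8").read()
perms = re.findall(r'uses-permission[^>]*android:name="([^"]+)"', manifest)

def looks_random(name: str) -> bool:
    last = name.rsplit(".", 1)[-1]                        # last segment of the name
    vowels = sum(c in "aeiouAEIOU" for c in last)
    return len(last) > 12 and vowels / len(last) < 0.2    # long, vowel-poor strings

odd = [p for p in perms if looks_random(p)]
print(f"{len(perms)} permissions declared; {len(odd)} look randomly generated")
```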
Another key technique is encrypted socket communication. Instead of using standard HTTP requests, which are easier to intercept, the malware relies on TCP socket connections to transmit data. This approach makes it difficult for traditional HTTP proxy tools to capture network traffic. Additionally, the malware encrypts the data before sending it, meaning that even if the packets are intercepted, their contents remain unreadable.
One more important aspect to note is that this malware adopts various themes to attract users. In addition to the fake X app, we also discovered several dating apps that use the same techniques. These apps had different background images but shared the same structure and functionality, indicating that they were likely created by the same developer as the fake X app. The continuous emergence of similar apps suggests that this malware is being widely distributed among Chinese-speaking users.
Figure 8. Various fake apps using the same technique
The rise of .NET MAUI-based malware highlights how cybercriminals are evolving their techniques to avoid detection. The techniques described above include hiding the core logic in C# blob binaries rather than DEX files or native libraries, multi-stage dynamic loading of encrypted payloads, padding AndroidManifest.xml with meaningless permissions, and encrypted TCP socket communication with the C2 server.
With these evasion techniques, the threats can remain hidden for long periods, making analysis and detection significantly more challenging. Furthermore, the discovery of multiple variants using the same core techniques suggests that this type of malware is becoming increasingly common.
Users should always be cautious when downloading and installing apps from unofficial sources, as these platforms are often exploited by attackers to distribute malware. This is especially concerning in countries like China, where access to official app stores is restricted, making users more vulnerable to such threats.
To keep up with the rapid evolution of cybercriminal tactics, users are strongly advised to install security software on their devices and keep it up to date at all times. Staying vigilant and ensuring that security measures are in place can help protect against emerging threats. By using McAfee Mobile Security, users can enhance their device protection and detect threats related to this type of malware in real-time.
Glossary of Terms
Indicators of Compromise (IOCs)
APKs:
C2:
The post New Android Malware Campaigns Evading Detection Using Cross-Platform Framework .NET MAUI appeared first on McAfee Blog.
It’s the month of top seeds, big upsets, and Cinderella runs by the underdogs. With March Madness basketball cranking up, a fair share of online betting will be sure to follow—along with online betting scams.
Since a U.S. Supreme Court ruling in 2018, individual states can determine their own laws for sports betting. Soon after, states leaped at the opportunity to legalize it in some form or other. Today, nearly 40 states and the District of Columbia have “live and legal” sports betting, meaning that people can bet on single-game sports through a retail or online sportsbook or a combination of the two in their state.
And it has generated billions of dollars in revenue for state governments.
If you’re a sports fan, this news has probably been hard to miss. Or at least the outcome of it all has been hard to miss. Commercials and signage in and around games promote several major online betting platforms. Ads have naturally made their way online too, complete with all kinds of promo offers to encourage people to get in on the action. However, that’s also opened the door for scammers who are looking to take advantage of people who want to place a bet online, according to the Better Business Bureau (BBB), often through shady or outright phony betting sites.
Let’s take a look at the online sports betting landscape, some of the scams that are cropping up, and some things you can do to make a safer bet this March or any time.
Among the states that have “live and legal” sports betting, most also offer online betting, a number that will likely grow given various state legislation that’s either been introduced or will be introduced soon.
If you’re curious about what’s available in your state, this interactive map shows the status of sports betting on a state-by-state level. Further, clicking on an individual state on the map will give you yet more specifics, such as the names of retail sportsbooks and online betting services that are legal in the state. For anyone looking to place a bet, this is a good place to start. It’s also helpful for people who are looking to get into online sports betting for the first time, as this is the sort of homework that the BBB advises people to do before placing a sports bet online. In their words, you can consider these sportsbooks to be “white-labeled” by your state’s gaming commission.
However, the BBB stresses that people should be aware that the terms and conditions associated with online sports betting will vary from service to service, as will the promotions that they offer. The BBB accordingly advises people to closely read these terms, conditions and offers. For one, “Gambling companies can restrict a user’s activity,” meaning that they can freeze accounts and the funds associated with them based on their terms and conditions. Also, the BBB cautions people about those promo offers that are often heavily advertised, “[L]ike any sales pitch, these can be deceptive. Be sure to read the fine print carefully.”
Where do scammers enter the mix? The BBB points to the rise of consumer complaints around bogus betting sites:
“You place a bet, and, at first, everything seems normal. But as soon as you try to cash out your winnings, you find you can’t withdraw a cent. Scammers will make up various excuses. For example, they may claim technical issues or insist on additional identity verification. In other cases, they may require you to deposit even more money before you can withdraw your winnings. Whatever you do, you’ll never be able to get your money off the site. And any personal information you shared is now in the hands of scam artists.”
If there’s a good reason you should stick to the “white-labeled” sites that are approved by your state’s gaming commission, this is it. Take a pass on any online ads that promote betting sites, particularly if they roll out big, almost too-good-to-be-true offers. These may lead you to shady or bogus sites. Instead, visit the ones that are approved in your state by typing their address directly into your browser.
In addition to what we mentioned above, there are several other things you can do to make your betting safer.
In addition to choosing a state-approved option, check out the organization’s BBB listing at BBB.org. Here you can get a snapshot of customer ratings, complaints registered against the organization, and the organization’s response to the complaints, along with its BBB rating, if it has one. Doing a little reading here can be enlightening, giving you a sense of what issues arise and how the organization has historically addressed them. For example, you may see a common complaint and how it’s commonly resolved. You may also see where the organization has simply chosen not to respond, all of which can shape your decision whether to bet with them or not.
Credit cards are a good way to go. One reason why is the Fair Credit Billing Act, which offers protection against fraudulent charges on credit cards by giving you the right to dispute charges over $50 for goods and services that were never delivered or otherwise billed incorrectly. Your credit card companies may have their own policies that improve upon the Fair Credit Billing Act as well. Debit cards don’t get the same protection under the Act.
Comprehensive online protection software will defend you against the latest virus, malware, spyware, and ransomware attacks plus further protect your privacy and identity. In addition to this, it can also provide strong password protection by generating and automatically storing complex passwords to keep your credentials safer from hackers and crooks who may try to force their way into your accounts. And, specific to betting sites, online protection can help prevent you from clicking links to known or suspected malicious sites.
With online betting cropping up in more and more states for more and more people, awareness of how it works and how scammers have set up their presence within it becomes increasingly important. Research is key, such as knowing who the state-approved sportsbooks and services are, what types of betting are allowed, and where. By sticking to these white-label offerings and reading the fine print in terms, conditions, and promo offers, people can make online betting safer and more enjoyable.
Editor’s Note: If gambling is a problem for you or someone you know, you can seek assistance from a qualified service or professional. Several states have their own helplines, and nationally you can reach out to resources like http://www.gamblersanonymous.org/ or https://www.ncpgambling.org/help-treatment/.
The post How to Protect Yourself from March Madness Scams appeared first on McAfee Blog.
Authored by Aayush Tyagi and M, Mohanasundaram
*Bold = Term Defined in Appendix
In this blog, we discuss how malware authors recently utilized a popular new trend to entice unsuspecting users into installing malware. This blog is meant as a reminder to stay cautious during a hype cycle. It’s a common trap and pitfall for unassuming consumers.
Figure 1: DeepSeek Google Search Trend from 1st January to 7th March
Malware creators frequently exploit trending search terms through hashtags and SEO manipulation to boost visibility and climb search rankings. This tactic, known as SEO poisoning, helps drive traffic to malicious sites, increasing downloads or earning rewards through affiliate programs. Recently, “AI” (Artificial Intelligence) has been one of the most popular keywords leveraged in these scams. Earlier this year, “DeepSeek” also gained traction, even surpassing “Nvidia” at its peak in search interest.
Let’s look at how we got here. Artificial Intelligence (AI) tools are transforming the world at an unprecedented pace, right before our eyes. In recent years, we’ve witnessed remarkable advancements in Generative AI, from the development of highly successful frontier LLMs (Large Language Models) such as ChatGPT, Gemini, LLaMA, and Grok, to their applications as coding assistants (GitHub Copilot or Tabnine), meeting assistants, and voice cloning software, among the more popular ones.
These tools are pervasive and easily available at your fingertips. In today’s world, AI isn’t just a complicated term used by select organizations; it has been adopted by households everywhere in one way or another and is reshaping entire industries and economies.
With the good comes the bad, and unfortunately AI has enabled an accelerated ecosystem of scammers adopting these tools. Examples include deepfake video and audio, cloned voices, and AI-generated phishing messages.
Besides the AI tools that empower scammers, there is the age-old tactic of piggybacking on popular news trends, where trending search terms are used to bait gullible users (read our blog on how game cracks are used as lures to deliver malware). One such newsworthy term being abused is DeepSeek, which McAfee discussed early this year.
The launch of the DeepSeek-R1 model (by DeepSeek, a Chinese company) generated significant buzz. The company claims the model was built so that the cost of developing and using the technology is a fraction of that of other Generative AI models such as OpenAI’s GPT-4o or Meta’s Llama 3.1. Moreover, the R1 model was released in January 2025 under an open-source license.
Within a few days of the release of the DeepSeek-R1 model, the Deepseek AI assistant—a chatbot for the R1 model—was launched on the Apple App Store and later the Google Play Store. In both app stores, Deepseek’s chatbot, which is an alternative to OpenAI’s ChatGPT, took the No. 1 spot and has been downloaded over 30 million times.
This stirred up the curiosity of many who wanted to experiment with the model. The interest spiked to the point where the DeepSeek website wasn’t available at times due to the sheer volume of people trying to set up accounts or download the app. This sense of excitement, anxiety, and impatience is exactly what scammers look for in their victims. It wasn’t long after the term went “viral” that scammers saw an opportunity and began circulating malware disguised as DeepSeek. Various malware campaigns followed, including crypto-miners, fake installers, DeepSeek impersonator websites, and fake DeepSeek mobile apps.
At McAfee Labs, we work hard to keep you safe, but staying informed is always a smart move. When navigating trending news stories, it’s important to stay cautious and take necessary precautions. We continuously track emerging threats across multiple platforms—including Windows, macOS, Android, iOS, and ChromeOS—to ensure our customers remain protected. While we do our part, don’t forget to do yours: enable Scam Protection, Web Protection, and Antivirus in your preferred security product.
McAfee products offer advanced AI-powered protection across all tiers—Basic, Essential, Premium, Advanced, and Ultimate. Our AI-Suite includes features like AI-powered Antivirus, Text Scam Detection, Web Protection, VPN, and Identity Protection, providing comprehensive security.
Check out McAfee Scam Detector, which enhances our ability to combat a wide range of scams and is included in our products at no extra cost.
For more tips on avoiding scams and staying safe online, visit the McAfee Smart AI Hub at mcafee.ai. You can also explore the latest insights on the State of the Scamiverse on McAfee’s blog and stay up to date on scam prevention strategies.
Together, we can outsmart scammers and make the internet safer for everyone.
In the rest of this article, we use simple examples to delve into more technical details for those seeking more analysis details.
McAfee Labs uncovered a variety of DeepSeek-themed malware campaigns attempting to exploit its popularity and target tech-savvy users. Multiple malware families were able to distribute their latest variants under the false pretense of being DeepSeek software.
Figure 2: Attack Vector
Users encounter these threats while searching for information about DeepSeek AI on the internet. They come across websites offering DeepSeek installers for different platforms, such as Android, Windows, and Mac. McAfee Labs found that a number of such installers were trojanized or simply repackaged applications. We identified multiple instances of keyloggers, crypto miners, password stealers, and trojan downloaders being distributed as DeepSeek installers.
Figure 3: DeepSeek Installers
In Figure 3, we show fake installers that distribute third-party software, such as winManager (highlighted in red) and Audacity (highlighted in blue).
In the simplest abuse of the DeepSeek name, certain affiliates were able to spike their partner downloads and get a commission based on pay-per-install partner programs. Rogue affiliates use this tactic to generate revenue through forced installations of partner programs.
Additionally, similar software installers were observed using the DeepSeek icon to appear more believable, or alternatively clicking ads and modifying browser settings (such as the default search engine) with the goal of generating additional ad revenue.
Figure 4: winManager (left) and Audacity (right)
The Deepseek icon was also misused by multiple Android applications to deceive users into downloading unrelated apps, thereby increasing download counts and generating revenue.
Figure 5: Android files abusing DeepSeek’s Logo
We also encountered DeepSeek-themed fake captcha pages. This isn’t new; it’s a popular technique used as recently as six months ago by LummaStealer.
A fake captcha is a fraudulent webpage that asks users to verify they are human but instead tricks them into downloading and executing malicious software. This malware can steal login credentials, browser information, and more.
Figure 6: Fake Captcha Page
In this instance, the website deepseekcaptcha[.]top pretends to offer a partnership program for content creators. It uses a technique called “brand impersonation,” adopting DeepSeek’s icons and color scheme to pass itself off as the official website.
Figure 7: deepseekcaptcha[.]top
Once the user registers for the program, they’re redirected to the fake captcha page.
Figure 8: Fake Captcha Page hosted on the website
Here, as shown above, the user is asked to “verify” themselves by pressing the Windows + R keys to open the Run window and then pressing CTRL + V to paste in a command that the page has silently copied to their clipboard.
The user would observe a screen as shown in figure 9.
Figure 9: Windows Run panel after copying the CMD
On clicking ‘OK’, malware will be installed that can steal browser and financial information from the system.
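One simple habit defeats this trick: look at what a page actually put on your clipboard before you paste it anywhere. Below is a small, hypothetical sketch of that kind of check on Windows; the keyword list is ours for illustration, not an exhaustive indicator set.

```python
# Hypothetical clipboard check: warn if the clipboard holds a command-like string.
# tkinter ships with Python and needs a desktop session for clipboard access.
import tkinter as tk

SUSPICIOUS = ("powershell", "mshta", "cmd /c", "curl", "bitsadmin", "-enc")

root = tk.Tk()
root.withdraw()                      # no window needed, just the clipboard
try:
    clip = root.clipboard_get()
except tk.TclError:                  # raised when the clipboard is empty
    clip = ""

if any(token in clip.lower() for token in SUSPICIOUS):
    print("Clipboard contains a command-like string; do NOT paste it into the Run window:")
    print(clip)
else:
    print("Clipboard contents:", clip[:200])
```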
McAfee’s Web Advisor protects against such threats. In this instance, the fake captcha page was blocked and marked as suspicious before it could be accessed. Even if you aren’t a McAfee customer, you can check out the browser plugin for free.
Figure 10: McAfee blocking malicious URL
In this section we talk about a *cryptominer malware that was masquerading as DeepSeek. By blocking its initial payload, we prevent a chain of events (Figure 11) on the computer that would have led to reduced performance on the device and potentially exposed it to further infection attempts.
Some example names used by the initial loader were:
Figure 11: CryptoMiner KillChain
Once installed, this malware communicates with its *C&C (Command and Control) server to download and execute a *PowerShell script. Figures 12 (a) and (b) show the malware connecting to its C&C IP address to download chunks of a script file, which is then stored in the AppData\Roaming folder as installer.ps1.
Figure 12(a): Sample connects to C&C IP Address
Figure 12(b): Installer.ps1 stored in Roaming folder
An attempt is made to bypass system policies and launch the script.
Figure 13: Base64 Encoded Malicious Code
Figure 14: PowerShell code for Process Injection.
The malware attempts to maintain *persistence on the victim’s computer.
Figure 15: Creating Run Key entry to maintain persistence
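If you want to check your own machine for this style of persistence, the Run key is easy to inspect. The sketch below (Windows only) lists the current user's Run entries and flags any that launch from the Roaming profile, the location this campaign uses; it is a quick illustration, not a removal tool.

```python
# List HKCU Run entries and flag ones launching from %APPDATA% (Roaming).
import os
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
roaming = os.environ.get("APPDATA", "").lower()

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    _, value_count, _ = winreg.QueryInfoKey(key)      # (subkeys, values, last_modified)
    for i in range(value_count):
        name, value, _ = winreg.EnumValue(key, i)
        flag = "  <-- launches from Roaming" if roaming and roaming in str(value).lower() else ""
        print(f"{name}: {value}{flag}")
```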
Figure 16: HTTP response that contains additional parameters
[{“address”:”494k9WqKJKFGDoD9MfnAcjEDcrHMmMNJTUun8rYFRYyPHyoHMJf5sesH79UoM8VfoGYevyzthG86r5BTGYZxmhENTzKajL3″,”idle_threads”:90,”idle_time”:1,”password”:”x”,”pool”:”pool.hashvault.pro:443″,”task”:”FALLEN|NOTASK”,”threads”:40}]
Figure 17: Notepad.exe being executed with additional parameters
Figure 18: Wallet status for the captured wallet address
The attacker purposely mines the Monero cryptocurrency, as it prioritizes anonymity, making it extremely difficult to track the movement of funds. This makes it a popular coin among crypto-miners.
PowerShell is a cross-platform command-line shell and scripting language developed by Microsoft, primarily used for task automation, configuration management, and streamlined administrative control across Windows, Linux, and macOS environments.
A cryptominer is software or hardware that uses computing power to validate cryptocurrency transactions, secure decentralized networks, and earn digital currency rewards, often straining system resources and raising energy consumption. When used in the context of malware, it is unauthorized software that covertly uses infected devices to mine cryptocurrency, draining resources, slowing performance, increasing energy costs, and often remaining difficult to detect or remove.
Process Injection: This is a term used to describe a technique where malware injects code into and overwrites legitimate processes in memory, thereby modifying their behavior to run malicious code and bypass security measures. The target processes are typically trusted processes.
C&C (Command and Control) is a communication channel used by attackers to remotely issue commands, coordinate activities, and data from compromised systems or networks.
Persistence: This term refers to the techniques that malware or an attacker uses to maintain long-term access to a compromised system, even after reboots, logouts, or security interventions. Persistence ensures that the malicious payload or backdoor remains active and ready to execute even if the system is restarted or the user tries to remove it.
In malware, a payload is the main malicious component delivered or executed once the infection occurs, enabling destructive activities such as data theft, system damage, resource hogging or unauthorized control and infiltration.
XMRig is an open-source cryptocurrency mining software primarily used for mining Monero. It was originally developed as a legitimate tool for miners to efficiently utilize system resources to mine Monero using CPU and GPU power. However, due to its open-source nature and effectiveness, XMRig has become a popular tool for cryptominers.
Monero (XMR) is a privacy-focused cryptocurrency that prioritizes anonymity, security, and decentralization. Launched in April 2014, Monero is designed to provide untraceable and unlinkable transactions, making it difficult for outside parties to monitor or track the movement of funds on its blockchain. It operates on a decentralized, peer-to-peer network but with enhanced privacy features.
The post Look Before You Leap: Imposter DeepSeek Software Seek Gullible Users appeared first on McAfee Blog.
In a digital landscape hungry for the next big thing in Artificial Intelligence, a new contender called DeepSeek recently burst onto the scene and has quickly gained traction for its advanced language models.
Positioned as a low-cost alternative to industry giants like OpenAI and Meta, DeepSeek has drawn attention for its rapid growth, affordability, and potential to reshape the AI landscape.
Unfortunately, a recent investigation by McAfee Labs found that the same hype is now fueling a barrage of malware attacks disguised as DeepSeek software and updates.
Here’s a breakdown of those research findings:
It starts with a user searching online to find DeepSeek to use for themselves. Innocent enough. The problem comes from malicious results that promise access to DeepSeek, but actually steal data and infect computers.
McAfee Labs’ blog post pulls back the curtain on three main deception methods:
1. Fake “DeepSeek” Installers
2. Unrelated Third-Party Software Installs
3. Fake Captcha Pages
McAfee’s experts underscore the importance of careful online habits and share best practices to keep threats at bay. For example, if a website ever asks you to press Windows + R and paste something you can’t see in full, don’t do it.
McAfee Labs’ findings reveal just how adaptable—and opportunistic—cybercriminals can be when fresh digital gold rushes emerge. By following basic security practices and staying skeptical about anything that seems too good to be true, you can explore new AI frontiers without handing over the keys to your device.
When in doubt, stop, do your due diligence, and only download from verified sources. Your curiosity about the latest tech trends shouldn’t come at the cost of your personal data or system security.
The post Bogus ‘DeepSeek’ AI Installers Are Infecting Devices with Malware, Research Finds appeared first on McAfee Blog.
Tax season is already stressful for many Americans, and to make matters worse, it’s also a golden opportunity for scammers.
According to a new 2025 tax season survey conducted by McAfee, nearly half (48%) of people say they, or someone they know, has received a message via email, social media, phone call, or text message falsely claiming to be from the IRS or an official state tax authority.
And when these deceptive messages and other manipulative AI practices work, research reveals it costs — a lot.
Gen Z adults (18-24) surveyed by McAfee reported experiencing the most scams, with nearly 40% saying they or someone they know has been scammed.
While young adults face high rates of attempted fraud, older adults (65-74) are still at greater risk of large financial losses. Among men in that age group who lost money in such a scam, 40% reported losing between $751 and $1,000, and half of the women lost between $2,501 and $5,000.
Meanwhile, the steepest losses overall were reported by those aged 45-54, with 10% saying they lost more than $10,000.
Criminals have long relied on phishing emails and fraudulent calls to obtain personal information—especially during tax season. Today, AI is raising the stakes.
Deepfake audio lets scammers sound exactly like IRS agents, and AI-generated phishing emails perfectly replicate official communications from reputable tax preparation services.
In fact, more than half (55%) of Americans say they’ve noticed scam attempts becoming more realistic than in previous years, and 87% worry AI is making them even harder to detect.
Here’s how a typical tax scam might play out: It often starts with an urgent text or email claiming your refund was rejected—or that you owe back taxes and must pay immediately. These messages can look and sound incredibly convincing, prompting recipients to click a malicious link or call a fake helpline.
Once scammers have your attention, they’ll ask for personal or financial information—like your Social Security number, bank details, or a credit card—to “fix” the supposed problem. Of course, it’s all a ploy to steal your identity or your cash.
McAfee highlights several tactics that have emerged in these AI-driven scams, including deepfake audio that impersonates IRS agents, AI-generated phishing emails that mimic legitimate tax preparation services, and urgent messages claiming a refund was rejected or that back taxes are owed.
Tax scams show no signs of slowing down in 2025. Whether you’re part of Gen Z, a senior, or somewhere in between, it pays to stay vigilant.
By recognizing the signs of a scam, safeguarding your personal information, and taking proactive steps, you can help ensure your refund ends up where it belongs: in your pocket.
The post Financial Losses from Tax Scams Top $1,000 on Average—and Gen Z is a Growing Target appeared first on McAfee Blog.
Scams are big business for cybercriminals, and they’re getting more sophisticated than ever. According to McAfee’s State of the Scamiverse 2025 report, the average person encounters 12 scams per day, while Americans see over 14 scam attempts daily, including three deepfake videos.
Fraudsters are leveraging AI-powered tools to create hyper-realistic deepfakes for as little as $5 and 10 minutes, making it harder than ever to distinguish between what’s real and what’s fake. The financial impact is staggering—87% of scam victims lose money, with one-third losing over $500, and nearly one in ten losing more than $5,000.
As a parent, one of my greatest concerns is ensuring my family doesn’t fall victim to these evolving scams.
So, here are five key ways to keep your loved ones safe in today’s Scamiverse.
Teaching kids (and adults) to be skeptical of what they see online is a crucial first step in scam prevention. Given the rise of deepfakes and AI-generated frauds, it’s essential to develop a questioning mindset:
With detected deepfakes surging tenfold globally and a 1,740% increase in North America alone, it’s more important than ever to show real-world examples of scams to kids and teens so they can recognize the signs.
Good digital habits can prevent many scams before they happen. Yet, 35% of scam victims say falling for a scam caused them moderate to significant distress, highlighting the importance of strong cyber hygiene:
Cybercriminals use the mosaic effect—piecing together publicly available information—to commit identity theft and financial fraud. Here’s how to lock down your digital footprint:
Phishing scams remain one of the most successful fraud tactics, often tricking victims into clicking on malicious links. According to McAfee, the most commonly reported scam types include:
To stay safe:
Staying informed is one of the best defenses against scams. With social media users sharing over 500,000 deepfakes in 2023, awareness is key. Here’s how to stay ahead:
Whether it’s deepfake impersonation scams, fraudulent investment schemes, or phishing texts, scammers are evolving rapidly. But with awareness, skepticism, and strong digital habits, you can help ensure your family stays protected from the ever-growing Scamiverse.
For more tips and security solutions, check out McAfee’s advanced protection tools to stay one step ahead of the fraudsters.
The post Protect Your Family From Scams With These 5 Key Online Safety Tips appeared first on McAfee Blog.
The internet is brimming with content designed to entertain, inform—and sometimes deceive. The latest tool in a cybercriminal’s arsenal? Deepfakes. From fabricated celebrity endorsements to fraudulent job interviews, AI-generated deepfake scams are growing at an alarming rate. As deepfake technology becomes more advanced, it’s harder than ever to discern real from fake—until it’s too late.
According to McAfee’s latest “State of the Scamiverse” report, deepfake scams have become an everyday reality. The average American now encounters 2.6 deepfake videos daily, with younger adults (18-24) seeing even more – about 3.5 per day. And for less than the cost of a latte and in under 10 minutes, scammers today can create shockingly convincing deepfake videos of anyone: your mom, your boss, or even your child.
At McAfee, we’re committed to helping users navigate this evolving threat landscape with cutting-edge protection tools. Understanding how deepfake scams work and how to safeguard yourself is the first step in staying ahead of cybercriminals.
Deepfake scams exploit the power of AI to create hyper-realistic audio, video, and images that can impersonate anyone—from politicians to CEOs, from family members to Hollywood stars. These fake videos and voices have been used to:
Our research shows that people encounter nearly three deepfakes a day online and that the number is growing, making the urgency to combat these scams greater than ever.
Figure 1: An AI-generated image of the Pope went viral online.
Deepfake scams typically follow a predictable pattern:
While deepfake technology is becoming increasingly sophisticated, there are still ways to identify AI-generated deception:
To stay one step ahead of cybercriminals, consider these safety measures:
Deepfake scams are not just a futuristic concern—they are a real and present danger. Cybercriminals will continue refining their tactics, but with the right awareness and security tools, you can outsmart them.
McAfee remains at the forefront of AI-driven security solutions, ensuring you have the protection you need in an increasingly deceptive digital world.
Stay one step ahead of deepfake threats. Download McAfee+ today and take control of your online security.
The post Data Shows You’ll Encounter A Deepfake Today—Here’s How To Recognize It appeared first on McAfee Blog.
Look both ways for a new form of scam that’s on the rise, especially if you live in Dallas, Atlanta, Los Angeles, Chicago, or Orlando: fake toll road scams. Those are the top five cities being targeted by scammers.
We’ve uncovered plenty of these scams, and our research team at McAfee Labs has revealed a major uptick in them over the past few weeks. Fake toll road scams nearly quadrupled by the end of February compared to where they were in January.
Figure 1. A chart showing the increasing frequency and volume of toll road scam messages
The scams play out like this:
Ping. You get a text notification. It says you have an unpaid tab for tolls and that you need to pay right away. And like many scams, it contains a link where you can pay up. Of course, that takes you to a phishing site that asks for your payment info (and sometimes your driver’s license number or even your Social Security number), which can lead to identity fraud and possibly identity theft.
Here’s one example that our Labs team tracked down. Pay close attention to the link. It follows the form of a classic scammer trick by altering the address of a known company so that it looks legit.
Figure 2. A screenshot showing an example of a Toll Roads scam text
The scam messages come in multiple varieties, however, so it’s important to stay vigilant across both your text and email inboxes. McAfee Labs found, for example, that some text messages and emails included PDFs, while others included links using popular URL shortener services such as bit.ly, shorturl.at, qrco.de, and short.gy. URL shorteners can also create a false sense of security, since people recognize the popular format and don’t see typos or suspicious parts of the full URL.
Figure 3. A screenshot of a toll road scam text that urges recipients to open a PDF
Additionally, these scammers put in a lot of effort to create legitimate-looking web pages and notices. Note how the following example does its best to look like branded digital letterhead. And, as usual, it uses urgent language about fines and legal action to help make sure you “Pay Now.”
Figure 4. An example of a PDF included in a scam toll road text message
They work. Scammers target their victims by matching them with the toll payment service in their city or state, which makes the scam look extra official. For example, a scammer would use an “E-ZPass” email to target someone in Orlando, our #5 city for toll road scams, which is one of the 19 states that E-ZPass serves. In southern California, victims get hit with phony texts from scammers posing as “The Toll Roads,” which is a payment service in that region.
The apparent legitimacy combined with the emotional sense of urgency creates the perfect snare for scammers.
Now, about those URLs to phishing sites. We mentioned that scammers take the URLs of known toll payment services and add some extra characters to them. In other cases, they’ve latched on to the root term “paytoll” as well. Our research team dug up several examples of fake toll sites, including:
Of course, don’t follow any of those links. And something else about those links — you can see scammers using dot-top, dot-vip, and dot-xin domains. These domains are cheap, available, and easy to purchase, which makes them attractive to scammers.
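For the curious, here is a small sketch of how you might inspect a link like these before trusting it: expand any known URL shortener to its final destination, then look at the domain it lands on. The shortener and TLD lists come from the examples in this article; treat the code as an illustration, not a substitute for web protection, and don't open links you suspect are malicious.

```python
# Illustration: expand shortened links and flag cheap TLDs seen in these scams.
import requests
from urllib.parse import urlparse

SUSPICIOUS_TLDS = {"top", "vip", "xin"}
KNOWN_SHORTENERS = {"bit.ly", "shorturl.at", "qrco.de", "short.gy"}

def inspect_link(url: str) -> None:
    host = (urlparse(url).hostname or "").lower()
    if host in KNOWN_SHORTENERS:
        # Follow redirects without downloading the page body to reveal the destination.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        url = resp.url
        host = (urlparse(url).hostname or "").lower()
        print("Shortened link expands to:", url)
    tld = host.rsplit(".", 1)[-1] if "." in host else ""
    if tld in SUSPICIOUS_TLDS:
        print(f"Warning: {host} uses a .{tld} domain, a pattern common in these scam texts")
    else:
        print("Final domain:", host)

inspect_link("https://bit.ly/example")   # hypothetical link
```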
According to McAfee Labs research, the following U.S. cities are experiencing the most of these scam texts:
Figure 5. The top cities where toll road scams are most prevalent
The scam has gotten so out of hand that the U.S. Federal Trade Commission (FTC) has issued a warning about it. They offer up the following advice:
We’ll add to that too, with:
The following images show additional phishing pages and links McAfee found in relation to different toll road scams.
The post Fake Toll Road Scam Texts are Everywhere. These Cities are The Most Targeted. appeared first on McAfee Blog.
Authored By Sakshi Jaiswal
McAfee Labs recently observed a surge in phishing campaigns that use fake viral video links to trick users into downloading malware. The attack relies on social engineering, redirecting victims through multiple malicious websites before delivering the payload. Users are enticed with promises of exclusive content, ultimately leading them to fraudulent pages and deceptive download links.
Figure 1: Geo Heatmap showing McAfee customer encounters over the past 3 weeks.
1. Upon executing the PDF file, the displayed page appears to be part of a phishing scam leveraging clickbait about a “viral video” to lure users into clicking suspicious links. The document contains blue hyperlinked text labeled as “Watch ➤ Click Here To Link (Full Viral Video Link)” and a deceptive video player graphic, giving the illusion of a playable video.
Figure 2: PDF Image
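For analysts, the embedded link can be pulled out of a PDF like this one without opening it in a viewer. A minimal sketch using pypdf (assumed installed) follows; “lure.pdf” is a placeholder file name.

```python
# Extract clickable URI links from a suspicious PDF's annotations.
from pypdf import PdfReader

reader = PdfReader("lure.pdf")
for page in reader.pages:
    for annot in (page.get("/Annots") or []):
        action = annot.get_object().get("/A", {})
        uri = action.get("/URI")
        if uri:
            print("Embedded link:", uri)
```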
2. The user clicks on “Watch ➤ Click Here To Link (Full Viral Video Link)“, which redirects them to a webpage (gitb.org) displaying fake “viral video leaked” content, excessive ads, and fake notifications to lure users. It promotes adult content, gambling, and misleading download buttons, which are common indicators of phishing or malware traps.
Figure 3: Redirected Webpage
3. This further redirects to malicious URL “hxxps[:]//purecopperapp.monster/indexind.php?flow_id=107&aff_click_id=D-21356743-1737975550-34G123G137G124-AITLS2195&keyword=Yourfile&ip=115.118.240.109&sub=22697121&source=157764”
Figure 4: Redirected Webpage2
4. The user is then redirected to the URL below: “hxxps[:]//savetitaniumapp.monster/?t=d6ebff4d554677320244f60589926b97”, which presents a password-protected download link hosted on Mega.nz, requiring the user to manually copy and paste the URL.
Figure 5: Redirected Webpage with download link
5. When the URL is opened, it displays a loading screen while preparing the malicious file for download and then shows a downloadable file named 91.78.127.175.zip with a size of 26.7 MB.
Figure 6: Screenshot of a ZIP file download from MEGA
6. The download completes and the file is stored in the Downloads folder.
Figure 7: Zip file downloaded
7. The ZIP archive (91.78.127.175.zip, 26.7 MB) contains a password-protected .7z file along with a .png file containing the password.
Figure 8: Files inside ZIP archive
8. The extracted .7z archive contains setup.msi, which is the actual malware payload.
Figure 9: setup.msi file
Upon execution of setup.msi, the malware:
1. Displays a CAPTCHA image to deceive users. Upon clicking “OK,” it begins dropping files into the %Roaming% directory.
Figure 10: Screenshot of CAPTCHA image
2. Drops files into the %Roaming% directory.
Figure 11: Dropped multiple files in %Roaming%
Figure 12: Process Tree
McAfee intercepts and blocks this infection chain at multiple stages.
URL blocking of the fake video pages.
Figure 13: McAfee Blocking URL
Figure 14: McAfee PDF file Detection
This campaign highlights how cybercriminals exploit social engineering tactics and clickbait content to distribute malware. Users should remain cautious when encountering suspicious video links. To stay protected against phishing attacks and malware infections, McAfee recommends:
The post The Dark Side of Clickbait: How Fake Video Links Deliver Malware appeared first on McAfee Blog.
Social media connects us to friends, trends, and news in real time—but it also opens the door to scammers looking to exploit trust and curiosity. From fake giveaways to impersonation scams, fraudsters use sophisticated tactics to trick users into handing over personal information, money, or access to their accounts.
Even the most internet-savvy users can fall victim to these deceptive schemes. That’s why it’s crucial to recognize the red flags before it’s too late. Whether it’s a DM from a “friend” in trouble, a deal that seems too good to be true, or a sudden request to verify your account, scammers prey on urgency and emotion to pull you in.
Here’s a look at some of the most common social media scams—and how you can stay one step ahead to protect yourself and your accounts.
Fraudsters use various tactics to lure unsuspecting users into their schemes, including:
Recognizing these red flags can help you stay safe:
Follow these precautions to reduce your risk of falling victim:
If you suspect you’ve fallen victim to a social media scam, take immediate action:
Social media scams are becoming more sophisticated, but you can protect yourself by staying informed and cautious.
Always verify messages, be skeptical of too-good-to-be-true offers, and use strong security measures to safeguard your accounts.
By recognizing these scams early, you can avoid financial loss and keep your personal information safe online.
McAfee helps protect you from online threats with advanced security tools, including identity monitoring, safe browsing features, and real-time malware protection. Stay one step ahead of scammers with trusted cybersecurity solutions.
The post The 9 Most Common Social Media Scams—and How to Spot Them Before It’s Too Late appeared first on McAfee Blog.
Typos. Too-good-to-be-true offers. Urgent warnings.
Scammers are getting smarter—and more convincing. New research from the Federal Trade Commission (FTC) reveals that Americans lost a staggering $12.5 billion to fraud in 2024, a 25% increase from the previous year. The median reported loss was $497, with imposter scams alone accounting for nearly $3 billion in losses.
Fraud isn’t just increasing—it’s hitting certain areas harder than others. Florida, Georgia, and Delaware ranked as the top three states with the highest per-capita fraud reports, while California led in total reports with over 500,000 cases.
And where are these scams happening? Scammers are reaching victims through phone calls, text messages, and social media, with social media emerging as one of the most lucrative platforms for fraud—70% of fraud reports linked to social media resulted in financial losses.
With scammers using increasingly sophisticated tactics, knowing how to spot red flags in emails and links is more critical than ever.
Here’s how to protect yourself from the latest phishing threats.
Simple Steps to Check a Link Before Clicking
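One concrete step is to parse a link offline and look at where it really points before you ever click it. The short Python sketch below does that; the red flags it prints (text before an “@” sign, punycode labels) are assumptions drawn from common phishing tricks, and the example URL is made up.

from urllib.parse import urlparse

def inspect_link(url: str) -> None:
    # Print the parts of a URL that matter before you decide to click.
    parts = urlparse(url)
    print("scheme:", parts.scheme)      # expect https
    print("hostname:", parts.hostname)  # the site the link actually goes to
    if parts.username:
        print("warning: text before '@' can disguise the real destination")
    if parts.hostname and any(label.startswith("xn--") for label in parts.hostname.split(".")):
        print("warning: punycode label (possible lookalike domain)")

inspect_link("http://account-verify.example@login-check.example.top/confirm")  # made-up example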
How to Protect Yourself from Phishing Attacks
Preventative Measures
What to Do if You Clicked a Suspicious Link
Phishing attacks are becoming more deceptive, but staying informed and cautious can protect you. Always verify links and emails before clicking, and use trusted cybersecurity tools like McAfee+ to keep your accounts and data safe.
Stay vigilant—don’t let scammers catch you off guard!
The post Avoid Being Scammed: How to Identify Fake Emails and Suspicious Links appeared first on McAfee Blog.
Cryptocurrency offers exciting opportunities—but it’s also a favorite playground for scammers.
With the rapid rise of deepfake technology and deceptive AI-driven schemes, even seasoned investors can fall victim to fraud. According to McAfee’s State of the Scamiverse report, deepfake scams are on the rise, with the average American now encountering 2.6 deepfake videos daily. And younger adults (18-24) see even more – about 3.5 per day.
From fake investment opportunities to phishing attempts, bad actors are more sophisticated than ever.
The recent wave of Trump-themed meme coins—more than 700 copycats attempting to mimic the real thing—highlights just how rampant crypto scams have become. If even the president’s cryptocurrency isn’t safe from impersonators, how can everyday investors protect themselves?
By knowing the red flags, you can safeguard your money and personal data from crypto scammers.
Scammers often lure victims with guaranteed returns or impossibly high profits. If an investment promises “risk-free” earnings or sounds like a financial miracle, run the other way—legitimate investments always carry some level of risk.
Example: A Ponzi scheme disguised as a crypto investment fund may claim to offer “10% daily profits” or “instant payouts.” In reality, they use new investors’ money to pay early participants—until the scam collapses.
Fraudsters frequently impersonate public figures—from Elon Musk to Donald Trump—to promote fake coins or crypto investments. The explosion of Trump-themed meme coins shows how easily scammers exploit famous names. Even if a project appears linked to a well-known figure, verify through official channels.
Example: A deepfake video featuring a celebrity “endorsing” a new crypto token. McAfee’s research found that nearly 3 deepfake videos per day are encountered by the average American, many of them tied to scams.
Scammers often set up fraudulent crypto exchanges or wallet apps that look legitimate but are designed to steal your money. They might advertise low fees, special bonuses, or exclusive access to new coins.
How to Protect Yourself:
Always use well-established exchanges with a proven track record.
Look for HTTPS encryption and verify the URL carefully.
Research if the platform is licensed and regulated.
Scammers thrive on urgency. They’ll push you to act immediately before you have time to think critically. Whether it’s a limited-time pre-sale or a “secret investment opportunity,” don’t let fear of missing out (FOMO) cloud your judgment.
Example: “Only 10 spots left! Invest now before prices skyrocket!”—Classic scam tactics designed to trigger impulsive decisions.
No legitimate crypto project will ever ask for:
Example: A fake customer support email pretending to be from Coinbase, asking you to confirm your wallet password—this is a phishing attempt!
Do Your Research: Always Google the project’s name + “scam” before investing.
Check Regulatory Status: See if the platform is licensed (DFPI, SEC, or other regulators).
Verify Official Websites & Socials: Scammers create lookalike websites with small typos—double-check URLs!
Use Cold Storage: Store your assets in a hardware wallet to protect against hacks.
Use tools like McAfee+: Monitor for scams and get warnings about potential deepfakes and other red flags.
Crypto offers incredible potential—but with great opportunity comes risk. Scammers are always evolving, using deepfake videos, phishing, and fraudulent investment schemes to trick even the savviest investors. By staying informed and following basic security practices, you can avoid getting caught in the next big crypto scam.
The post How to Spot a Crypto Scam: The Top Red Flags to Watch For appeared first on McAfee Blog.
Cybercriminals will always try to cash in on a good thing, and football is no exception. Online scammers are ramping up for the big game with all types of schemes designed to rip you off and steal your personal info—but you have several ways you can beat them at their game.
Like shopping holidays, tax season, and even back-to-school time, scammers take advantage of annual events that get people searching for deals and information online. You can include big games and tournaments in that list too.
Specific to this big game, you can count on several types of scams to rear their heads this time of year—ticket scams, merchandise scams, betting scams, and phony sweepstakes as well. They’re all in the mix, and they’re all avoidable. Here, we’ll break them down.
As of two weeks out, tickets for the big game on the official ticketing website were going for $6,000 or so, and that was for the so-called “cheap seats.” Premium seats in the lower bowl 50-yard line, sold by verified resellers, were listed at $20,000 a pop or higher.
While the game tickets are now 100% mobile, that hasn’t prevented scammers from trying to pass off phony tickets as the real deal. They’ll hawk those counterfeits in plenty of places online, sometimes on sites like your friendly neighborhood Craigslist.
So if you’re in the market for tickets, there are certainly a few things to look out for:
If you plan on enjoying the game closer to home, you may be in the market for some merch—a hat, a jersey, a tee, or maybe some new mugs for entertaining when you host the game at your place. With all the hype around the game, out will come scammers who set up bogus online stores. They’ll advertise items for sale but won’t deliver—leaving you a few dollars lighter and the scammers with your payment information, which they can use on their own for identity fraud.
You can shop safely with a few straightforward steps:
This is a great one to start with. Directly typing in the correct address for reputable online stores and retailers is a prime way to avoid scammers online. In the case of retailers that you don’t know much about, the U.S. Better Business Bureau (BBB) asks shoppers to do their research and make sure that retailer has a good reputation. The BBB makes that easier with a listing of retailers you can search simply by typing in their name.
If you feel like doing extra sleuthing, look up the address of the website and see when it was launched. A visit to the Internet Corporation for Assigned Names and Numbers (ICANN) at ICANN.org gives you the option to search a web address and see when it was launched, along with other information about who registered it. While a recently launched site is not an indicator of a scam site alone, sites with limited track records may give you pause if you want to shop there—particularly if there’s a chance it was just propped up by a scammer.
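If you would rather script that lookup, the sketch below uses the third-party python-whois package (one WHOIS client among many; its use here is an assumption, not a McAfee recommendation) to print when a domain was registered and by which registrar.

# pip install python-whois
import whois  # third-party WHOIS client

record = whois.whois("example.com")
print("created:", record.creation_date)  # may be a single date or a list, depending on the registry
print("registrar:", record.registrar)
# A creation date only days or weeks old is a reason to pause before buying from the site.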
Look for the lock icon in your browser when you shop.
Secure websites begin their address with “https,” not just “http.” That extra “s” stands for “secure,” which means that the site uses a secure protocol for transmitting sensitive info like passwords, credit card numbers, and the like over the internet. It often appears as a little padlock icon in the address bar of your browser, so double-check for that. If you don’t see that the site is secure, it’s best to avoid making purchases on that website.
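Your browser performs this check for you, but for the curious, here is a short sketch using Python’s standard ssl module that connects to a site, verifies its certificate, and prints who the certificate was issued to and when it expires. The hostname is just a placeholder.

import socket
import ssl

host = "www.example.com"  # placeholder hostname
context = ssl.create_default_context()  # verifies the certificate chain and hostname
with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("issued to:", dict(pair[0] for pair in cert["subject"]))
        print("expires:", cert["notAfter"])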
Credit cards are a good way to go. One reason why is the Fair Credit Billing Act, which offers protection against fraudulent charges on credit cards by giving you the right to dispute charges over $50 for goods and services that were never delivered or otherwise billed incorrectly. Your credit card companies may have their own policies that improve upon the Fair Credit Billing Act as well. Debit cards don’t get the same protection under the Act.
Comprehensive online protection software will defend against the latest virus, malware, spyware, and ransomware attacks plus further protect your privacy and identity. In addition to this, it can also provide strong password protection by generating and automatically storing complex passwords to keep your credentials safer from hackers and crooks who may try to force their way into your accounts. And, specific to the scams floating around this time of year, online protection can help prevent you from clicking links to known or suspected malicious sites.
It’s hard to watch sports these days without odds and stat lines popping up onto the screen, along with a fair share of ads that promote online betting. If you’re thinking about making things interesting with some betting, keep a few things in mind:
As it is every year, you’ll see all kinds of sweepstakes and giveaways leading up to the game, plenty of them legitimate. Yet as they appear, scammers will try to blend in by rolling out their own bogus promotions. Their aim: to part you from your cash or even your personal information.
A quick way to sniff out these scams is to take a close look at the promotion. For example, if it asks you to provide your bank information to send you your prize money, count on it being a scam. Likewise, if the promotion asks you to pay to claim a prize in some form or other, it’s also likely someone’s trying to scam you.
In all, steer clear of promotions that ask for something in return, particularly your money or personal information.
As in recent years, all kinds of scams will try to glom onto the big game this year. Some of the best advice for avoiding them is not to give in to the hype. Scammers prey on scarcity, a sense of urgency, and keyed-up emotions in general. Their hope is that these things make you less critical and more likely to overlook things that would otherwise seem sketchy or too good to be true. Staying focused as you shop, place a wager, or otherwise round out your enjoyment of the big game is some of your best defense against scammers right now, and any time.
The post Super Scams – Beat the Online Scammers Who Want to Sack Your Big Game appeared first on McAfee Blog.
Beyoncé has officially announced her Cowboy Carter world tour, and the excitement is through the roof! With her last tour selling out in record time, fans know they need to act fast to secure their tickets. Unfortunately, that urgency is exactly what scammers prey on.
In 2022 alone, Americans lost nearly $8.8 billion to fraud, and ticket scams are one of the most common ways scammers cash in on eager fans. But don’t worry—we’ve got you covered. Before you rush to buy tickets to Beyoncé’s latest tour, here’s how to spot and avoid ticket scams so you don’t get left outside the stadium with nothing but regret.
Ticket scams come in different forms, but the most common ones include:
Scammers know how to create a sense of urgency, often advertising tickets to sold-out events at too-good-to-be-true prices. If you’re desperate to see Beyoncé, it’s easy to get caught up in the rush—but staying cautious can save you from getting scammed.
The best way to avoid being scammed is to buy only from reputable sources like official ticketing platforms (Ticketmaster, Live Nation, AXS) or directly from the event’s website. However, if you’re looking elsewhere, be on the lookout for these red flags:
When an event sells out, scammers flood social media with offers. Platforms like Facebook Marketplace, Instagram, and Craigslist are filled with fake ticket sellers. If you didn’t get tickets during the official sale, be cautious about where you’re looking.
Pro Tip: Follow Beyoncé’s official social media pages and event organizers for updates. Sometimes, extra dates or official resale opportunities become available.
Scammers often advertise tickets below face value to lure in victims. While real fans sometimes sell their tickets at a discount, it’s a huge red flag if the price is way lower than expected.
Pro Tip: If you’re buying from an individual, check their profile carefully. Look for signs of a fake account, such as recently created pages or multiple listings in different cities.
Some scammers go the extra mile, creating entire websites that mimic real ticket platforms. These fake sites not only sell counterfeit tickets but may also steal your credit card information.
Pro Tip: Always type in the official ticketing site’s URL manually or search for it on Google. Avoid clicking links from unknown sources, and double-check that the site uses “HTTPS” and has no misspellings in the URL.
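If you want to automate the “no misspellings” habit, a small Python sketch can compare a link’s hostname against a short list of ticketing domains you trust and flag close-but-not-exact matches. The trusted list, the 0.8 similarity threshold, and the lookalike example are illustrative assumptions.

from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED = ["ticketmaster.com", "livenation.com", "axs.com"]  # illustrative list

def looks_like_typosquat(url: str) -> bool:
    # Flag hostnames that are close to, but not exactly, a trusted domain.
    host = (urlparse(url).hostname or "").lower().removeprefix("www.")
    for good in TRUSTED:
        similar = SequenceMatcher(None, host, good).ratio() > 0.8
        if similar and host != good and not host.endswith("." + good):
            return True
    return False

print(looks_like_typosquat("https://www.ticketrnaster.com/beyonce"))  # True (lookalike spelling)
print(looks_like_typosquat("https://www.ticketmaster.com/beyonce"))   # False (exact match)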
Even if you get a real ticket, that doesn’t mean it’s yours alone. Some scammers sell the same ticket to multiple people, leading to chaos when multiple buyers show up at the event.
Pro Tip: Only buy from platforms that offer verified resale tickets with guarantees, like StubHub, SeatGeek, or VividSeats.
Some scammers sell general admission tickets as if they were premium seats. You may think you’re getting front-row access, only to find out you overpaid for a standing-room ticket.
Pro Tip: Always confirm the seat location with the seller. Many venues have seating charts available online, so check before purchasing.
Scammers hack into Ticketmaster accounts and transfer tickets to themselves, effectively locking the rightful owner out of their seats. Victims often receive a flood of emails, including notifications of ticket transfers they never authorized. By the time they realize what’s happened, their tickets are gone, likely resold by the scammer.
Pro Tip: To prevent this, ensure your Ticketmaster account is secure by using a strong password, enabling two-factor authentication, and being wary of suspicious login attempts or phishing emails.
To make sure you don’t fall victim to a ticket scam, follow these golden rules:
Buy from official sources – Beyoncé’s official website, Ticketmaster, and AXS are your safest bets.
Use a credit card – If something goes wrong, you can dispute the charge.
Be wary of social media sellers – If you’re buying from a stranger, research their profile and history first.
Check the URL – Make sure you’re on the real ticketing website before purchasing.
Avoid high-pressure sales tactics – Scammers want you to act fast—don’t fall for it!
Beyond ticket scams, cybercriminals also use major events like Beyoncé’s tour to spread malware and phishing attacks. McAfee’s comprehensive online protection can help keep your devices and personal information safe by blocking malicious websites, preventing identity theft, and alerting you to potential fraud.
Beyoncé’s Cowboy Carter tour is one of the most anticipated events of the year, and everyone wants to be part of the experience. But scammers know this too, and they’re out in full force. By staying smart, sticking to verified ticket sources, and being wary of deals that seem too good to be true, you can avoid scams and secure your spot at one of the biggest concerts of 2025.
Stay safe, Beyhive—and get ready to enjoy the show!
The post Buying Tickets for Beyoncé’s Cowboy Carter Tour? Don’t Let Scammers Ruin Your Experience appeared first on McAfee Blog.
The rise of AI-driven cyber threats has introduced a new level of sophistication to phishing scams, particularly those targeting Gmail users.
Criminals are using artificial intelligence to create eerily realistic impersonations of Google support representatives, Forbes recently reported. These scams don’t just rely on misleading emails; they also include convincing phone calls that appear to come from legitimate sources.
If you receive a call claiming to be from Google support, just hang up—this could be an AI-driven scam designed to trick you into handing over your Gmail credentials.
Here’s everything you need to know about the scam and how to protect yourself:
Hackers have devised a multi-step approach to trick users into handing over their Gmail credentials. Here’s how the scam unfolds:
The attack often begins with a phone call from what appears to be an official Google support number. The caller, using AI-generated voice technology, convincingly mimics a real Google representative. Their tone is professional, and the caller ID may even display “Google Support,” making it difficult to immediately recognize the scam.
Once engaged, the scammer informs the victim that suspicious activity has been detected on their Gmail account. They may claim that an unauthorized login attempt has occurred, or that their account is at risk of being locked. The goal is to create a sense of urgency, pressuring the victim to act quickly without thinking critically.
To appear credible, the scammer sends an email that looks almost identical to a real Google security notification. The email may include official-looking branding and a request to verify the user’s identity by entering a code. The email is designed to look so authentic that even tech-savvy individuals can be fooled.
If the victim enters the verification code, they inadvertently grant the attacker full access to their Gmail account. Since the scammer now controls the two-factor authentication process, they can lock the real user out, change passwords, and exploit the account for further attacks, including identity theft, financial fraud, or spreading phishing emails to others.
This scam is particularly dangerous because it combines multiple layers of deception, making it difficult to spot. Unlike standard phishing emails that may contain poor grammar or suspicious links, AI-enhanced scams:
To protect yourself from AI-powered scams, follow these essential security measures:
1. Be Skeptical of Unsolicited Calls from “Google”
Google does not randomly call users about security issues. If you receive such a call, hang up immediately and report the incident through Google’s official support channels.
2. Verify Security Alerts Directly in Your Account
If you receive a message stating that your account has been compromised, do not click any links or follow instructions from the email. Instead, go directly to your Google account’s security settings and review recent activity.
3. Never Share Verification Codes
Google will never ask you to provide a security code over the phone. If someone requests this information, it is a scam.
4. Enable Strong Authentication Methods
5. Regularly Monitor Your Account Activity
Check the “Security” section of your Google account to review login activity. If you see any unrecognized sign-ins, take immediate action by changing your password and logging out of all devices.
6. Use a Password Manager
A password manager helps create and store strong, unique passwords for each of your accounts. This ensures that even if one password is compromised, other accounts remain secure.
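To see what “strong and unique” looks like in practice, here is a minimal sketch using Python’s standard secrets module. It is only an illustration; a password manager generates something similar for you and, just as importantly, remembers it.

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    # Build a password from a cryptographically secure random source.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # different every run; never reuse it across accounts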
If you believe your account has been compromised, take these steps immediately:
As AI technology advances, cybercriminals will continue to find new ways to exploit users. By staying informed and implementing strong security practices, you can reduce the risk of falling victim to these sophisticated scams.
At McAfee, we are dedicated to helping you protect your digital identity. Stay proactive, stay secure, and always verify before you trust.
For more cybersecurity insights and protection tools, check out McAfee+.
The post How to Make Sure Your Gmail Account is Protected in Light of Recent AI Scams appeared first on McAfee Blog.
Video games are a favorite pastime for millions of kids and teenagers worldwide, offering exciting challenges, epic battles, and opportunities to connect with friends online. But what happens when the search for an edge in these games—like cheats or special hacks—leads to something far more dangerous?
McAfee Labs has uncovered a growing threat aimed at gamers, especially kids, who unknowingly download malware disguised as game hacks, software cracks, and cryptocurrency tools.
Here’s what you need to know about this sneaky scam and how to stay safe:
Popular games like Minecraft, Roblox, Fortnite, Apex Legends, and Call of Duty are among those targeted by these scams. Gamers searching for cheats to gain an advantage—like seeing through walls, speeding up characters, or unlocking premium items—are being lured to malicious links. These links often appear on GitHub, a platform where developers share and collaborate on code, or in YouTube videos claiming to offer step-by-step instructions.
GitHub is typically trusted by programmers and tech enthusiasts, but cybercriminals exploit this trust by uploading malware that masquerades as game hacks. By naming their repositories after popular games or tools, scammers trick users into downloading malware instead of the promised cheat software.
The process starts when someone searches online for free cheats or cracked software—like tools to unlock premium features of Spotify or Adobe—and stumbles upon a GitHub repository or a YouTube video. These repositories often look convincing, with professional descriptions, screenshots, and even licenses designed to appear legitimate.
Figure 1: Attack Vector
Once users follow the instructions, they’re often asked to disable their antivirus software or Windows Defender. The reasoning provided is that antivirus programs will mistakenly identify the hack or crack as dangerous. In reality, this step clears the way for malware to infect their device.
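If you ever want to confirm that protection is still switched on after someone has followed one of these “tutorials,” you can query Windows Defender from a script. Below is a rough, Windows-only sketch that shells out to PowerShell’s Get-MpComputerStatus cmdlet; the Windows Security app shows the same status.

import subprocess

# Ask Windows Defender whether real-time protection is currently enabled (Windows only).
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "(Get-MpComputerStatus).RealTimeProtectionEnabled"],
    capture_output=True, text=True,
)
print("Real-time protection enabled:", result.stdout.strip())  # expect 'True'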
What Happens After the Malware is Downloaded?
Instead of receiving a functional cheat, victims unknowingly install a dangerous program known as Lumma Stealer or similar malware variants. This software quietly:
Each week, new repositories and malware variants appear as older ones are detected and removed. This cycle makes it difficult for platforms like GitHub to completely eliminate the threat.
Kids and teens are prime targets because they often lack experience in identifying online scams. The promise of features like “Aimbots” (to improve shooting accuracy) or “Anti-Ban” systems (to avoid getting caught by game administrators) makes these fake downloads even more tempting. Scammers exploit this curiosity and eagerness, making it easier to trick young gamers into infecting their devices.
Figure 2: YouTube Video containing malicious URL in description.
McAfee Labs offers these tips to avoid falling victim to these scams:
The takeaway? Scammers will go to great lengths to exploit the interests and habits of gamers. And unfortunately, this isn’t the first time we’ve seen such malware attacks targeting gamers. By educating yourself and your family about these threats, you can play smarter and stay safer online. Always remember: no cheat or crack is worth compromising your security.
Read the full report from McAfee Labs outlining our research and findings on this malware risk. Learn more about how you can protect yourself with McAfee+.
The post Scam Alert: Fake Minecraft, Roblox Hacks on YouTube Hide Malware, Target Kids appeared first on McAfee Blog.
The artificial intelligence arms race has a new disruptor—DeepSeek, a Chinese AI startup that has quickly gained traction for its advanced language models.
Positioned as a low-cost alternative to industry giants like OpenAI and Meta, DeepSeek has drawn attention for its rapid growth, affordability, and potential to reshape the AI landscape.
But as the buzz around its capabilities grows, so do concerns about data privacy, cybersecurity, and the implications of feeding personal information into AI tools with uncertain oversight.
DeepSeek’s AI models, including its latest version, DeepSeek-V3, claim to rival the most sophisticated AI systems developed in the U.S.—but at a fraction of the cost.
According to reports, training its latest model required just $6 million in computing power, compared to the billions spent by its American counterparts. This affordability has allowed DeepSeek to climb the ranks, with its AI assistant even surpassing ChatGPT as the top free app on Apple’s U.S. App Store.
What makes DeepSeek’s rise even more surprising is how abruptly it entered the AI race. The company originally launched as a hedge fund before pivoting to artificial intelligence—an unusual shift that has fueled speculation about how it managed to develop such advanced models so quickly. Unlike other AI startups that spent years in research and development, DeepSeek seemed to emerge overnight with capabilities on par with OpenAI and Meta.
However, DeepSeek’s meteoric rise has sparked skepticism. Some analysts and AI experts question whether its success is truly due to breakthrough efficiency or if it has leveraged external resources—potentially including restricted U.S. AI technology. OpenAI has even accused DeepSeek of improperly using its proprietary tech, a claim that, if proven, could have major legal and ethical ramifications.
One of the biggest concerns surrounding DeepSeek isn’t just how it handles user data—it’s that it reportedly failed to secure it altogether.
According to The Register, security researchers at Wiz discovered that DeepSeek left a database completely exposed, with no password protection, allowing public access to millions of chat logs, API keys, backend data, and operational details.
This means that conversations with DeepSeek’s chatbot, including potentially sensitive information, were openly available to anyone on the internet. Worse still, the exposure reportedly could have allowed attackers to escalate privileges and gain deeper access into DeepSeek’s infrastructure. While the issue has since been fixed, the incident highlights a glaring oversight: even the most advanced AI models are only as trustworthy as the security behind them.
Here’s why caution is warranted:
DeepSeek specifically states in its terms of service that it collects, stores, and has permission to share just about all the data you provide while using the service.
Figure 1. Screenshot of DeepSeek Privacy Policy shared on LinkedIn
It specifically notes collecting your profile information, credit card details, and any files or data shared in chats. What’s more, that data isn’t stored in the United States, which has strict data privacy regulations. DeepSeek is a Chinese company with limited required protections for U.S. consumers and their personal data.
If you’re using AI tools—whether it’s ChatGPT, DeepSeek, or any other chatbot—it’s crucial to take steps to protect your information:
As AI chatbots like DeepSeek gain popularity, safeguarding your personal data is more critical than ever. With McAfee’s advanced security solutions, including identity protection and AI-powered threat detection, you can browse, chat, and interact online with greater confidence—because in the age of AI, privacy is power.
The post Explaining DeepSeek: The AI Disruptor That’s Raising Red Flags for Privacy and Security appeared first on McAfee Blog.
Identity theft is a growing concern, and Data Privacy Week serves as an important reminder to safeguard your personal information. In today’s digital age, scammers have more tools than ever to steal your identity, often with just a few key details—like your Social Security number, bank account information, or home address.
Unfortunately, identity theft claims have surged in recent years, jumping from approximately 650,000 in 2019 to over a million in 2023, according to the Federal Trade Commission (FTC). This trend underscores the urgent need for stronger personal data protection habits.
So, how do scammers pull it off, and how can you protect yourself from becoming a victim?
How Do Scammers Steal Your Identity?
Scammers are resourceful, and there are multiple ways they can access your personal information. The theft can happen both in the physical and digital realms.
When scammers steal your identity, they often leave behind a trail of unusual activity that you can detect. Here are some common signs that could indicate identity theft:
If you suspect that your identity has been stolen, time is of the essence. Here’s what you need to do:
While you can’t completely eliminate the risk of identity theft, there are several steps you can take to protect yourself:
Identity theft can be a stressful and overwhelming experience, but by acting quickly and taking proactive steps to protect your personal information, you can minimize the damage and reclaim your identity.
The post How Scammers Steal Your Identity and What You Can Do About It appeared first on McAfee Blog.
Authored by Anuradha, Sakshi Jaiswal
In 2024, scams in India have continued to evolve, leveraging sophisticated methods and technology to exploit unsuspecting individuals. These fraudulent activities target people across demographics, causing financial losses and emotional distress. This blog highlights some of the most prevalent scams this year, how they operate, some real-world scenarios, tips to stay vigilant and what steps to be taken if you become a victim.
This blog covers the following scams:
Scam Tactics:
Fraudsters on WhatsApp employ deceptive tactics to steal personal information, financial data, or gain unauthorized access to accounts. Common tactics include:
Case 1: In the figure below, a user is being deceived by a message originating from the +244 country code, assigned to Angola. The message offers an unrealistic investment opportunity promising a high return in just four days, which is a common scam tactic. It uses pressure and informal language, along with a link for immediate action.
Case 2: In the figure below, a user is being deceived by a message originating from the +261 country code, assigned to Madagascar. The message claims that you have been hired and asks you to click a link to view the offer or contact the sender which is a scam.
Case 3: In the figure below, a user is being deceived by a message originating from the +91 country code, assigned to India. Scammers may contact you, posing as representatives of a legitimate company, offering a job opportunity. The recruiter offers an unrealistic daily income (INR 2000–8000) for vague tasks like searching keywords, which is suspicious. Despite requests, they fail to provide official company details or an email ID, raising credibility concerns. They also ask for personal information prematurely, a common red flag.
Case 4: In the figure below, a user is being deceived by a message originating from the +84 country code, assigned to Vietnam. The offer to earn money by watching a video for just a few seconds and providing a screenshot is a common tactic used by scammers to exploit individuals. They may use the link to gather personal information, or your action could lead to phishing attempts.
Case 5: In the figure below, a user is being misled by a message originating from the country codes +91, +963, and +27, corresponding to India, Syria, and South Africa, respectively. The message claims to offer a part-time job with a high salary for minimal work, which is a common tactic used by scammers to lure individuals. The use of popular names like “Amazon” and promises of easy money are red flags. The link provided might lead to phishing attempts or data theft. It’s important not to click on any links, share personal details, or respond to such unsolicited offers.
Case 6: The messages encourage you to post fake 5-star reviews for businesses in exchange for a small payment, which is unethical and often illegal. Scammers use such tactics to manipulate online ratings, and the provided links could lead to phishing sites or malware. Avoid engaging with these messages, clicking on the links, or participating in such activities.
How to Identify WhatsApp Scams:
Impact:
Prevention:
Scam Tactics:
How to Identify Instant Loan Scam:
Impact:
Prevention:
Voice-cloning scams use advanced AI technology to replicate the voices of familiar people, such as friends, family members, or colleagues, to manipulate victims into transferring money or providing sensitive information.
Scam Tactics:
How to Identify AI Voice-Cloning Scams:
Impact:
Prevention
Scam Tactics
Scammers use various methods to deceive victims into revealing credit card information or making unauthorized payments:
How to identify Credit card scam:
Impact:
Prevention:
Scam Tactics:
In fake delivery scams, fraudsters pose as delivery services to trick you into providing personal information, card details, or payment. Common tactics include:
How to Identify Fake Delivery Scams:
Impact:
Prevention:
Scam Tactics:
Scammers pose as police officers or government officials, accusing victims of being involved in illegal activities like money laundering or cybercrime. They intimidate victims by threatening arrest or legal action unless immediate payment is made to “resolve the matter.”
How to Identify Digital Arrest Scam:
Impact: Daily losses from such scams run into lakhs of rupees (one lakh is 100,000), as victims panic and transfer money or provide sensitive information under pressure.
Prevention:
What to Do if You Fall Victim
If you’ve fallen victim to any of the mentioned scams—Digital Arrest Scam, Instant Loan Scam, Voice Cloning Scam, WhatsApp Scam, Fake Delivery Scam or Credit Card Scam—it’s important to take immediate action to minimize damage and protect your finances and personal information. Here are common tips and steps to follow for all these scams:
Conclusion:
As scams in India continue to grow in number and sophistication, it is crucial to raise awareness to protect individuals and businesses from falling victim to these fraudulent schemes. Scams such as phishing, fake job offers, credit card scams, loan scams, investment frauds and online shopping frauds are increasingly targeting unsuspecting victims, causing significant financial loss and emotional harm.
By raising awareness of scam warning signs and encouraging vigilance, we can equip individuals to make safer, more informed decisions online. Simple precautions, such as verifying sources, being cautious of unsolicited offers, and safeguarding personal and financial information, can go a long way in preventing scams.
It is essential for both individuals and organizations to stay informed and updated on emerging scam tactics. Through continuous awareness and proactive security measures, we can reduce the impact of scams, ensuring a safer and more secure digital environment for everyone in India.
The post Rising Scams in India: Building Awareness and Prevention appeared first on McAfee Blog.
Romance scams have surged in sophistication, preying on emotions and exploiting the trust of victims in the digital age.
The latest case involving a French woman who believed she was romantically involved with actor Brad Pitt is a stark reminder of the vulnerabilities we face online. But this incident, unfortunately, does not stand alone. Scammers continue to exploit celebrity fame to defraud unsuspecting victims, using deepfakes and other manipulative tactics. Recent examples include:
The most recent Brad Pitt impersonation scam follows a straightforward but insidious pattern of manipulation. Here’s how the scam unfolded step by step:
The Initial Contact: Anne, a French interior decorator, downloaded Instagram during a family ski trip. Shortly after, she was approached by a scammer pretending to be Brad Pitt’s mother, who claimed her son needed someone like Anne in his life.
Building Trust: The scammer, posing as Pitt, used AI-generated photos and emotionally charged messages to gain Anne’s trust. The fake Brad Pitt “knew how to talk to women,” according to Anne, creating a sense of intimacy and connection.
Figure 1. These fake images were used in a fake Brad Pitt romance scam.
The Financial Request: The scammer fabricated a crisis, claiming Pitt needed $1 million for a kidney treatment but couldn’t access his funds due to his ongoing divorce from Angelina Jolie. Playing on Anne’s empathy, the fraudster requested financial help.
The Emotional Manipulation: At the time, Anne was going through her own divorce and had recently received a settlement. Believing she was aiding someone in need, she transferred $850,000 to the scammer.
The Scam Unravels: The hoax came to light after Pitt publicly debuted his relationship with Ines de Ramon at the Venice Film Festival. This contradiction exposed the deception and ended the scam.
Brad Pitt recently spoke out, according to Variety, condemning the scammers for taking “advantage of the strong bond between fans and celebrities.”
Romance scammers often exploit online dating platforms, social media, and fan communities to identify potential victims. Being aware of the warning signs can help you identify and avoid romance scams:
Unrealistic Claims: If someone’s story seems too good to be true, it likely is. For example, a Hollywood star personally reaching out on a fan site is improbable. Celebrities rarely engage in direct, personal communication with fans, especially through unofficial platforms like fan sites, due to time constraints, security concerns, and the sheer volume of fan interactions.
Urgent Requests for Money: Scammers often fabricate crises requiring immediate financial assistance.
Reluctance to Meet in Person: Excuses to avoid face-to-face meetings or video calls can signal deception.
Inconsistencies in Their Story: Contradictory details or vague answers are common red flags.
Pressure to Keep the Relationship Secret: Scammers may isolate victims by discouraging them from discussing the relationship with friends or family.
While the tactics of romance scammers can be sophisticated, there are steps you can take to safeguard your heart and your finances:
Verify Identities: Use reverse image searches to check if profile pictures are stolen. Research their claims and background.
Be Cautious with Personal Information: Avoid sharing sensitive details, such as financial information or passwords.
Avoid Sending Money: Never transfer funds to someone you haven’t met in person, regardless of their story.
Keep Conversations Public: Use the messaging platform of the dating site or social media app rather than moving to private communication.
Watch Out for AI: Artificial intelligence (AI) has made it much easier for scammers to create deepfake audio and video, making romance scams even more realistic. McAfee’s Ultimate Guide to AI Deepfakes can help you learn how to spot and protect yourself from deepfakes.
Trust Your Instincts: If something feels off, listen to your intuition, which can pick up on subtle inconsistencies or red flags that your conscious mind may overlook, acting as an early warning system.
Figure 2. An AI-generated image that circulated widely showed the Pope wearing a designer coat.
If you believe you are being targeted by a romance scam, take the following steps:
Cease Communication: Stop interacting with the individual immediately.
Report the Incident: Notify the dating platform or social media site, and report the scam to your local authorities or organizations like the FTC.
Protect Your Accounts: Change passwords and monitor your financial accounts for suspicious activity.
Seek Support: Talk to trusted friends or family members about the situation.
Raising awareness about romance scams is essential in preventing others from falling victim. Share information about common tactics and red flags with your loved ones, particularly those who may be more vulnerable, such as elderly family members or friends navigating online dating for the first time.
While the promise of romance can be enticing, it’s crucial to approach online relationships with caution and awareness.
By recognizing red flags, protecting your personal information, and reporting suspicious activity, you can safeguard yourself and others from the emotional and financial devastation of romance scams.
The post Breaking Down the Brad Pitt Scam: How it Happened and What We Can Learn appeared first on McAfee Blog.
Authored by Aayush Tyagi
Video game hacks, cracked software, and free crypto tools remain popular bait for malware authors. Recently, McAfee Labs uncovered several GitHub repositories offering these tempting “rewards,” but a closer look reveals something more sinister. As the saying goes, if it seems too good to be true, it probably is.
GitHub is often exploited for malware distribution due to its accessibility, trustworthiness, and developer-friendly features. Attackers can easily create free accounts and host repositories that appear legitimate, leveraging GitHub’s reputation to deceive users.
McAfee Labs encountered multiple repositories offering game hacks for top-selling video games such as Apex Legends, Minecraft, Counter-Strike 2, Roblox, Valorant, Fortnite, Call of Duty, and GTA V, or offering cracked versions of popular software and services, such as Spotify Premium, FL Studio, Adobe Express, SketchUp Pro, Xbox Game Pass, and Discord, to name a few.
These attack chains begin when users search the internet for game hacks, cracked software, or cryptocurrency-related tools and eventually come across GitHub repositories, or YouTube videos pointing to such repositories, that offer this software.
We noticed a network of such repositories where the description of software keeps on changing, but the payload remains the same: a Lumma Stealer variant. Every week, a new set of repositories with a new malware variant is released, as the older repositories are detected and removed by GitHub. These repositories also include distribution licenses and software screenshots to enhance their appearance of legitimacy.
Figure 1: Attack Vector
These repositories also contain instructions on how to download and run the payload, and they ask the user to disable Windows Defender or any other AV software before downloading it. The reasoning given is that, since the software relates to game hacks, bypassing software authentication, or cryptocurrency mining, AV products will detect and delete it.
This social engineering technique, combined with the trustworthiness of GitHub, works in the malware authors’ favor, enabling them to infect more users.
Children are frequently targeted by such scams, as malware authors exploit their interest in game hacks by highlighting potential features and benefits, making it easier to infect more systems.
As discussed above, users come across malicious repositories by searching the internet (highlighted in red).
Figure 2: Internet Search showing GitHub results.
Or through YouTube videos that contain a link to the repository in the description (highlighted in red).
Figure 3: YouTube Video containing malicious URL in description.
Once the user accesses the GitHub repository, they find a distribution license and other supporting files meant to trick them into thinking the repository is genuine and credible.
Figure 4: GitHub repository containing Distribution license.
Repositories also contain a detailed description of the software and the installation process, further manipulating the user.
Figure 5: Download instructions present in the repository.
Sometimes, the repositories contain instructions to disable AV products, misleading users into infecting themselves with the malware.
Figure 6: Instructions to disable Windows Defender.
To target more children, the repositories describe the software in detail, highlighting all the features included in the package, such as Aimbots and Speed Hacks, and how easily players can gain an advantage over their opponents.
They even mention that the package comes with an advanced Anti-Ban system, so the player’s account won’t be suspended, and that the software has a popular community. This creates the perception that, since many users already run the software, it must be safe, and that anyone not using it is missing out.
Figure 7: Features mentioned in the GitHub repository.
The downloaded files were, in most cases, Lumma Stealer variants, but in the latest repositories we noticed new malware variants being distributed through the same infection vector.
Once the user downloads the file, they get the following set of files.
Figure 8: Files downloaded from GitHub repository.
When the user runs the ‘Loader.exe’ file as instructed, it iterates through the file system and registry keys to collect sensitive information.
Figure 9: Loader.exe checking for Login credentials for Chrome.
It searches for crypto wallets and password-related files, enumerates the browsers installed on the system, and iterates through their user data to gather anything useful.
Figure 10: Loader.exe checking for Browsers installed on the system.
Then the malware connects to C2 servers to transfer data.
Figure 11: Loader.exe connecting to C2 servers to transfer data.
This behavior is similar to the Lumma Stealer variants we have seen earlier.
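For a quick manual check of whether a machine is currently talking to the command-and-control addresses listed at the end of this post, here is a rough triage sketch using the third-party psutil package (netstat shows the same information). It is a spot-check only, not a replacement for scanning with an up-to-date AV product.

# pip install psutil
import psutil

# C2 addresses from the indicator list at the end of this post.
C2_ADDRESSES = {"104.21.48.1", "104.21.112.1", "104.21.16.1"}

# May need elevated privileges to see connections owned by other processes.
for conn in psutil.net_connections(kind="inet"):
    if conn.raddr and conn.raddr.ip in C2_ADDRESSES:
        print(f"Suspicious connection: pid={conn.pid} -> {conn.raddr.ip}:{conn.raddr.port}")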
McAfee blocks this infection chain at multiple stages:
Figure 12: McAfee blocking URLs
Figure 13: McAfee blocking the malicious file
In conclusion, this GitHub repository infection chain demonstrates how cybercriminals exploit the accessibility and trustworthiness of popular websites such as GitHub to distribute malware like Lumma Stealer. By leveraging users’ desire for game hacks, an edge in a favorite video game, or licensed software for free, they trick users into infecting themselves.
At McAfee Labs, we are committed to helping organizations protect themselves against sophisticated cyber threats, such as the GitHub repository technique. Here are our recommended mitigations and remediations:
As of publishing this blog, these are the GitHub repositories that are currently active.
URLs:
github[.]com/632763276327ermwhatthesigma/hack-apex-1egend
github[.]com/VynnProjects/h4ck-f0rtnite
github[.]com/TechWezTheMan/Discord-AllinOne-Tool
github[.]com/UNDERBOSSDS/ESET-KeyGen-2024
github[.]com/Rinkocuh/Dayz-Cheat-H4ck-A1mb0t
github[.]com/Magercat/Al-Photoshop-2024
github[.]com/nate24321/minecraft-cheat2024
github[.]com/classroom-x-games/counter-str1ke-2-h4ck
github[.]com/LittleHa1r/ESET-KeyGen-2024
github[.]com/ferhatdermaster/Adobe-Express-2024
github[.]com/CrazFrogb/23fasd21/releases/download/loader/Loader[.]Github[.]zip
github[.]com/flashkiller2018/Black-Ops-6-Cheats-including-Unlocker-Tool-and-RICOCHET-Bypass
github[.]com/Notalight/h4ck-f0rtnite
github[.]com/Ayush9876643/r0blox-synapse-x-free
github[.]com/FlqmzeCraft/cheat-escape-from-tarkov
github[.]com/Ayush9876643/cheat-escape-from-tarkov
github[.]com/Ayush9876643/rust-hack-fr33
github[.]com/ppetriix/rust-hack-fr33
github[.]com/Ayush9876643/Roblox-Blox-Fruits-Script-2024
github[.]com/LandonPasana21/Roblox-Blox-Fruits-Script-2024
github[.]com/Ayush9876643/Rainbow-S1x-Siege-Cheat
github[.]com/Ayush9876643/SonyVegas-2024
github[.]com/123456789433/SonyVegas-2024
github[.]com/Ayush9876643/Nexus-Roblox
github[.]com/cIeopatra/Nexus-Roblox
github[.]com/Ayush9876643/m0dmenu-gta5-free
github[.]com/GerardoR17/m0dmenu-gta5-free
github[.]com/Ayush9876643/minecraft-cheat2024
github[.]com/RakoBman/cheat-apex-legends-download
github[.]com/Ayush9876643/cheat-apex-legends-download
github[.]com/cIiqued/FL-Studio
github[.]com/Ayush9876643/FL-Studio
github[.]com/Axsle-gif/h4ck-f0rtnite
github[.]com/Ayush9876643/h4ck-f0rtnite
github[.]com/SUPAAAMAN/m0dmenu-gta5-free
github[.]com/atomicthefemboy/cheat-apex-legends-download
github[.]com/FlqmzeCraft/cheat-escape-from-tarkov
github[.]com/Notalight/h4ck-f0rtnite
github[.]com/Notalight/FL-Studio
github[.]com/Notalight/r0blox-synapse-x-free
github[.]com/Notalight/cheat-apex-legends-download
github[.]com/Notalight/cheat-escape-from-tarkov
github[.]com/Notalight/rust-hack-fr33
github[.]com/Notalight/Roblox-Blox-Fruits-Script-2024
github[.]com/Notalight/Rainbow-S1x-Siege-Cheat
github[.]com/Notalight/SonyVegas-2024
github[.]com/Notalight/Nexus-Roblox
github[.]com/Notalight/minecraft-cheat2024
github[.]com/Notalight/m0dmenu-gta5-free
github[.]com/ZinkosBR/r0blox-synapse-x-free
github[.]com/ZinkosBR/cheat-escape-from-tarkov
github[.]com/ZinkosBR/rust-hack-fr33
github[.]com/ZinkosBR/Roblox-Blox-Fruits-Script-2024
github[.]com/ZinkosBR/Rainbow-S1x-Siege-Cheat
github[.]com/ZinkosBR/Nexus-Roblox
github[.]com/ZinkosBR/m0dmenu-gta5-free
github[.]com/ZinkosBR/minecraft-cheat2024
github[.]com/ZinkosBR/h4ck-f0rtnite
github[.]com/ZinkosBR/FL-Studio
github[.]com/ZinkosBR/cheat-apex-legends-download
github[.]com/EliminatorGithub/counter-str1ke-2-h4ck
Github[.]com/ashishkumarku10/call-0f-duty-warz0ne-h4ck
EXE file hashes:
CB6DDBF14DBEC8AF55986778811571E6
C610FD2A7B958E79F91C5F058C7E3147
3BBD94250371A5B8F88B969767418D70
CF19765D8A9A2C2FD11A7A8C4BA3DEDA
69E530BC331988E4E6FE904D2D23242A
35A2BDC924235B5FA131095985F796EF
EB604E2A70243ACB885FE5A944A647C3
690DBCEA5902A1613CEE46995BE65909
2DF535AFF67A94E1CDAD169FFCC4562A
84100E7D46DF60FE33A85F16298EE41C
00BA06448D5E03DFBFA60A4BC2219193
C2 addresses (IPs):
104.21.48.1
104.21.112.1
104.21.16.1
The post GitHub’s Dark Side: Unveiling Malware Disguised as Cracks, Hacks, and Crypto Tools appeared first on McAfee Blog.
Inauguration Day has come and gone, and the peaceful transfer of power couldn’t have happened without the intricate systems that ensure the integrity of the electoral process—specifically, cybersecurity.
Behind the scenes, a vast network of digital defenses worked to protect elections from disinformation, cyberattacks, and manipulation, all of which pose increasing threats in today’s digital age. From securing ballots to combating deepfakes, these measures play a critical role in upholding trust in democracy and making days like Inauguration Day possible.
In the digital age, elections face unprecedented threats designed to undermine public trust and disrupt democratic processes. Among the most common challenges are:
These threats highlight the urgent need for robust cybersecurity measures to protect the democratic process.
To counter these threats, governments and organizations have implemented advanced strategies and technologies:
These measures are critical in securing the journey from Election Day to Inauguration Day, building public confidence in the democratic process.
As you consume news about the inauguration and the new administration, it’s more important than ever to be vigilant about fake news. Fake news crops up in plenty of places on social media. And it has for some time now. In years past, it took the form of misleading posts, image captions, quotes, and the sharing of outright false information in graphs and charts. Now with the advent of AI, we see fake news taken to new levels of deception:
It’s critical to be wary of disinformation, intentionally misleading information manipulated to create a flat-out lie, as well as misinformation, which may include social posts that unknowingly get facts wrong.
To combat misinformation and AI deepfakes, it’s key to:
Deepfakes don’t just spread false information—they often lead users to phishing sites or malware. With tools like McAfee+, you can navigate the digital landscape with confidence.
The post From Election Day to Inauguration: How Cybersecurity Safeguards Democracy appeared first on McAfee Blog.
The devastating wildfires sweeping through Southern California have left countless neighborhoods in ruins, forcing thousands to evacuate and destroying homes in their path. While many people across the nation are moved to support those affected, this goodwill often becomes a target for opportunistic cybercriminals. McAfee researchers have discovered that social media networks have been flooded with deceptive images, showing how cryptocurrencies can be used to make donations for fire relief efforts. We believe these to be scams trying to dupe consumers. McAfee CTO, Steve Grobman says, “It’s really unfortunate because it’s such a tragic event, and we’re seeing cybercriminals and scammers take advantage of the situation in a whole host of ways, from fake GoFundMe sites to fraudulent campaign donation pages.”
Figure 1. Cryptocurrency Donation Requests
Steve continues, “The use of generative AI has fueled the creation of fake content, like viral images of the Hollywood sign engulfed in flames, which our deepfake detection technology confirmed were AI-generated. These tools are helping scammers misrepresent reality and exploit public emotions. We’ve seen fake accounts impersonating celebrities like Emma Watson and Kim Kardashian, promoting nonexistent charities to deceive people into donating money.”
The average American encounters a staggering 14.4 scam messages and deepfakes daily through social media, text messages, and emails, according to McAfee’s latest “State of the Scamiverse” report.
Now, think about this: even in your everyday life, that’s a lot of noise to sift through. But when you’re in the chaos of recovering from a disaster like a wildfire—juggling insurance claims, emergency communications, and rebuilding your life—the sheer volume of scams adds another layer of overwhelm. It’s a perfect storm for distraction, making it even easier for cybercriminals to exploit your vulnerability. Here’s what you need to know to protect yourself from scams while providing genuine help to wildfire victims.
Natural disasters and major news events provide fertile ground for cybercriminals. Cliff Steinhauer, Director of Information Security at the National Cybersecurity Alliance, explains that people eager to help during a crisis can act emotionally, skipping necessary steps to verify the legitimacy of donation platforms or relief efforts.
Scammers watch disaster news closely to craft scams tailored to the event. The emotional urgency surrounding a catastrophe like the California wildfires increases the likelihood of falling victim to these attacks.
A recent McAfee survey found that 59% of Americans say they or someone they know has been the victim of an online scam. Of those victims, 84% lost money to the scam, with an average loss of $1,471 – and nearly 1 in 10 lost over $5,000.
Many scams during crises fall under the umbrella of social engineering, a tactic where attackers manipulate people into divulging sensitive information or funds. Here are some of the most common schemes to watch out for:
Scammers often create counterfeit websites or social media posts masquerading as legitimate charities. These pages may look convincing but divert donations into the hands of criminals.
Emails, texts, and phone calls pretending to be from government agencies or well-known charities may attempt to steal personal data or payment details.
Victims of disasters are especially vulnerable. Scammers might pose as organizations offering aid, only to harvest sensitive information like bank account details or steal identities.
Modern scammers use AI to craft phishing attempts that are harder to spot. Unlike older scams with obvious grammar mistakes, AI-generated messages can appear professional and persuasive.
Figure 2. Fake Celebrity Donation Requests
Whether you’re donating to wildfire relief efforts or seeking aid, these steps can help protect you:
Use trusted resources like Give.org or Charity Navigator to confirm the legitimacy of charities.
Platforms like GoFundMe now provide verified lists of fundraisers for disaster relief.
Be wary of websites with misspelled URLs or unusual domain extensions. Look for “https” and padlock symbols to confirm the site is secure. (A short illustrative sketch of these checks follows this list.)
Phishing attempts often come via unsolicited emails, texts, or social media ads. Instead of clicking, go directly to a charity’s official website by typing its address into your browser.
Not all paid advertisements on platforms like Facebook or Instagram are legitimate. Avoid providing personal or payment information through these channels without verification.
Be cautious of campaigns that fail to explain how your donation will be used. Reputable organizations are transparent about how funds are allocated.
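Several of the URL checks above are easy to automate for yourself. The sketch below is purely illustrative and is not McAfee’s detection technology: it flags a donation URL that lacks HTTPS, uses an unusual domain extension, or closely resembles, but doesn’t match, a well-known charity domain. The domain and extension lists are small, hypothetical samples.

```python
# Illustrative sketch only -- not McAfee's detection technology.
# Quick, automatable versions of the manual checks above: HTTPS,
# unusual domain extensions, and lookalike charity domains.
from urllib.parse import urlparse
from difflib import SequenceMatcher

KNOWN_GOOD = {"redcross.org", "give.org", "charitynavigator.org", "gofundme.com"}
SUSPICIOUS_TLDS = {".top", ".xyz", ".icu"}  # examples, not an exhaustive list

def check_donation_url(url: str) -> list[str]:
    """Return a list of human-readable warnings for a donation URL."""
    warnings = []
    parsed = urlparse(url)
    host = parsed.hostname or ""

    if parsed.scheme != "https":
        warnings.append("No HTTPS -- payment info could be exposed.")
    if any(host.endswith(tld) for tld in SUSPICIOUS_TLDS):
        warnings.append(f"Unusual domain extension: {host}")
    for good in KNOWN_GOOD:
        ratio = SequenceMatcher(None, host, good).ratio()
        if 0.75 <= ratio < 1.0:  # close to a real charity domain, but not it
            warnings.append(f"Looks like {good} but isn't: {host}")
    return warnings

print(check_donation_url("http://charitynavigater.org/donate"))
```

Real protection layers in far more signals than this, but even quick checks like these catch the most obvious fakes.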
Steve Grobman states, “If consumers want to help with relief efforts, they should always go to validated organizations and use payment methods with protections, like credit cards. Wiring money or using cryptocurrency can make it nearly impossible to recover funds if it turns out to be fraudulent. While many GoFundMe sites are legitimate, scammers exploit the platform’s low barrier to entry. Consumers should verify campaigns with the individuals or families they claim to support or stick to reputable charities.”
In the aftermath of California wildfires, staying vigilant is essential. While most people are dedicated to recovery and support, a few may attempt to exploit the situation. By learning to spot common scams and taking proactive steps, you can safeguard yourself and your community from additional harm. Use a robust and trustworthy scam detection tool. McAfee can block risky sites even if you accidentally click a link in a scam text. When it comes to text messages, our smart AI puts a stop to scams before you click—detecting any suspicious links and sending you an alert.
In an age where deepfake technology is becoming increasingly sophisticated, protecting yourself from manipulated videos, audio, and images is critical. McAfee Deepfake Detector is designed to safeguard individuals and organizations by identifying and alerting you to potential deepfakes, ensuring that you can trust what you see and hear online.
The post Scammers Exploit California Wildfires: How to Stay Safe appeared first on McAfee Blog.
Amid the devastation of the Los Angeles County wildfires – scorching an area twice the size of Manhattan – McAfee threat researchers have identified and verified a rise in AI-generated deepfakes and misinformation, including startling but false images of the Hollywood sign engulfed in flames.
Social media and local broadcast news have been flooded with deceptive images claiming the Hollywood sign is engulfed in flames, with many people alleging that the iconic landmark is “surrounded by fire.”
Figure 1. AI-generated image shared on Facebook on January 9th, 2025.
Fact check: The Hollywood sign is still standing and is intact. A live feed of the Hollywood sign clearly shows the sign is not currently in harm’s way or engulfed in flames.
Figure 2: Live view of the Hollywood sign taken at 3:29 PT on Friday, January 10, 2025.
McAfee researchers have examined dozens of images shared across X, Facebook, TikTok, and Instagram, and have verified these are indeed AI-generated images and videos. In addition to analysis from our own threat researchers, McAfee’s image deepfake detection technology has flagged images shown here (and many more) of the Hollywood Hills as AI-generated, with the fire serving as a key factor in its analysis.
McAfee’s investigation traced many of the images back to Gemini, an AI-based image generation platform. This finding underscores the increasing sophistication of fake image synthesis, where fake images and videos can be created in mere seconds and spread to more than a million views in just 24 hours, as was the case with the social post shared on Facebook below.
Figure 3: Screenshot of deepfake video of Hollywood sign on fire. This video was discovered on Facebook and had already achieved 1.3 million views in 24 hours.
McAfee CTO, Steve Grobman states, “AI tools have supercharged the spread of disinformation and misinformation, enabling false content—like recent fake images of the Hollywood sign engulfed in flames—to circulate at unprecedented speed. This makes it critical for social media users to keep their guard up, approach viral posts with skepticism, and verify sources to distinguish fact from fiction.”
Figure 4. McAfee’s advanced AI models identify images that have been modified or created using AI. The heatmap highlights the areas used to identify and confirm AI usage.
AI-generated still images are incredibly easy to produce. In less than a minute, we were able to create a convincing image of the Hollywood sign on fire, for free, using an AI image-generating Android app (we have not published these images, only those found on social media). Many such apps are available. Some filter for violent and other objectionable content, yet images like the Hollywood sign on fire fall outside those guardrails. Additionally, many of these apps offer free trial credits as part of their business model, making it quick and easy to create and share images. AI image generation is a widely available, easily accessible tool used in many misinformation campaigns.
See below for more examples:
Figure 5. Examples on Instagram.
Upon closer inspection, some images carried clearly visible watermarks from generative AI tools such as Grok. While this might be an obvious telltale sign for some people, many others don’t recognize or aren’t familiar with such watermarks.
Figure 6. The Grok watermark is clearly visible in the image above.
There are several straightforward steps that you can take to spot a fake. We recommend a combination of healthy skepticism and awareness combined with the right technology, such as McAfee Deepfake Detector.
While not all AI is malicious or ‘bad,’ bad actors commonly use the technology for deepfake scams, misinformation, and disinformation. The deepfakes outlined here appear to have no malicious intent beyond misinforming social media users, yet we expect them to evolve, with scammers creating similar deepfakes as part of fake donation scams. We advise everyone to stay vigilant and learn how to spot deepfakes online:
Plenty of deepfakes can lure you into sketchy corners of the internet, places where malware and phishing sites take root. Consider using comprehensive online protection software like McAfee+ along with McAfee Deepfake Detector to keep safe. In addition to several features that protect your devices, privacy, and identity, they can warn you of unsafe sites too.
The post The Hollywood Sign is Not on Fire: Deepfakes Spread During L.A. Wildfires appeared first on McAfee Blog.
As CES kicks off in Las Vegas, McAfee proudly stands at the forefront of innovation, showcasing our leadership in AI and our commitment to driving transformative breakthroughs in tech. Here are the key highlights of McAfee’s participation at CES 2025:
At CES, we are announcing McAfee Scam Detector – the most comprehensive protection against text, email, and video scams. Today’s scams are smarter, sneakier, and more convincing than ever. We’re helping consumers take back control with AI-powered scam detection to stop scammers in their tracks.
Tuesday Spotlight:
Dan Huynh, McAfee’s VP of Business Development, joins a panel of business leaders to explore the capabilities of AI-powered PCs. From enhanced video and photo editing to faster computing speeds and improved security, this session delves into how AI PCs are reshaping work, play, and creativity.
McAfee has announced an exciting partnership with AMD to combat deepfake scams and misinformation. The McAfee Deepfake Detector now leverages the Neural Processing Unit (NPU) in AMD Ryzen AI 300 Series processors, enabling faster and more accurate detection of manipulated content.
Qualcomm is also showcasing McAfee’s Deepfake Detector technology at CES, with demos running on their high-performance, low-powered AI silicon. These demonstrations highlight McAfee’s commitment to tackling the growing threat of malicious AI deepfakes.
Thursday Spotlight:
German Lancioni, McAfee’s Chief AI Scientist, takes the stage to discuss using AI as a tool against AI-generated disinformation. This session will tackle the question: How can people trust what they see in a world of malicious AI deepfakes?
As CES 2025 unfolds, McAfee is proud to lead the charge in addressing the challenges and opportunities that AI brings to our increasingly digital world. Through groundbreaking innovations, strategic partnerships, and thought leadership, we’re not just imagining the future of tech—we’re actively shaping it.
We invite you to join us and our partners at CES to experience our cutting-edge technologies firsthand, engage with experts, and learn how McAfee is redefining security in the age of AI. Together, we’re building a safer, smarter, and more trusted digital landscape for everyone. Stay tuned for more updates as we continue to push the boundaries of what’s possible.
The post McAfee Shines at CES 2025: Redefining AI Protection for All appeared first on McAfee Blog.
For less than the cost of a latte and in under 10 minutes, scammers today can create shockingly convincing deepfake videos of anyone: your mom, your boss, or even your child.
Imagine receiving a video call from your mom asking to borrow money for an emergency, or getting a voicemail from your boss requesting urgent access to company accounts. These scenarios might seem straightforward, but in 2025, they represent a growing threat: deepfake scams that can be created for just $5 in under 10 minutes. According to McAfee’s latest “State of the Scamiverse” report, deepfake scams have become an everyday reality. The average American now encounters 2.6 deepfake videos daily, with younger adults (18-24) seeing even more – about 3.5 per day. These aren’t just celebrity face-swaps or entertaining memes; they’re sophisticated scams designed to separate people from their money.
Welcome to the Scamiverse: an ever-expanding realm of online scams and fraud that’s targeting people everywhere. Despite increasing awareness, scams are on the rise globally, costing victims money, time, and emotional well-being. Understanding this evolving landscape is key to staying protected.
According to McAfee’s December 2024 survey of 5,000 adults:
Beyond financial losses, there’s a significant emotional toll. More than a third of victims reported moderate to significant distress after falling for an online scam, with many spending over a month trying to resolve the resulting issues. Deepfake scams surged tenfold in 2024, with North America experiencing a jaw-dropping 1,740% increase. Over 500,000 deepfakes circulated on social media in 2023 alone. Unsurprisingly, two-thirds of people report being more worried about scams than ever before.
Deepfakes are no longer futuristic tech—they’re an everyday reality. McAfee’s survey showed:
Deepfake videos are most commonly encountered on:
| Platform | % Reporting Deepfakes |
| --- | --- |
| Facebook | 68% |
| Instagram | 30% |
| TikTok | 28% |
| X (formerly Twitter) | 17% |
Interestingly, different age groups tend to encounter deepfakes on different platforms. While older Americans are more likely to see them on Facebook (over 80% of those 65+ report this), younger users more frequently encounter them on Instagram and TikTok. Younger Americans encounter more deepfakes (3.5 daily for ages 18-24) than older groups (1.2 for ages 65+), while seniors report higher exposure to deepfakes on Facebook.
Deepfakes leverage generative AI to create convincing fake videos and audio. Initially popularized through memes featuring celebrities like Tom Cruise and Mark Zuckerberg, deepfakes are now weaponized by scammers. These tools can:
McAfee Labs tested 17 deepfake creation tools, finding that scammers can:
These tools enable scammers to achieve professional-grade results with minimal effort, making deepfake scams increasingly accessible.
The McAfee survey highlighted a wide range of scams. Some frequently involve deepfakes, such as:
| Scam Type | % Reporting |
| --- | --- |
| Fake shipping notifications | 36% |
| Fake news videos | 21% |
| Celebrity endorsement scams | 18% |
With deepfake technology becoming more accessible and sophisticated, here are McAfee’s top tips to protect yourself:
As we move further into 2025, the threat of deepfake scams is likely to grow. While about half of Americans feel confident they can spot these scams, the technology is evolving rapidly. The best defense is staying informed, maintaining healthy skepticism, and using modern security tools designed to combat these AI-powered threats. Scams have evolved with AI, but so have defenses. Staying vigilant, leveraging advanced cybersecurity tools, and educating yourself can help you navigate the Scamiverse safely. As scammers grow smarter, so must we. Remember, if something seems off about a video call or message from a loved one or colleague, take a moment to verify through another channel. In the age of $5 deepfakes, that extra step could save you thousands of dollars and countless hours of stress.
The post State of the Scamiverse – How AI is Revolutionizing Online Fraud appeared first on McAfee Blog.
McAfee threat researchers have identified several consumer brands and product categories most frequently used by cybercriminals to trick consumers into clicking on malicious links in the first weeks of this holiday shopping season. As holiday excitement peaks and shoppers hunt for the perfect gifts and amazing deals, scammers are taking advantage of the buzz. The National Retail Federation projects holiday spending will reach between $979.5 and $989 billion this year, and cybercriminals are capitalizing by creating scams that mimic the brands and categories consumers trust. From October 1 to November 12, 2024, McAfee safeguarded its customers from 624,346 malicious or suspicious URLs tied to popular consumer brand names – a clear indication that bad actors are exploiting trusted brand names to deceive holiday shoppers.
McAfee’s threat research also reveals a 33.82% spike in malicious URLs targeting consumers with these brands’ names in the run-up to Black Friday and Cyber Monday. This rise in fraudulent activity aligns with holiday shopping patterns during a time when consumers may be more susceptible to clicking on offers from well-known brands like Apple, Yeezy, and Louis Vuitton, especially when deals seem too good to be true – pointing to the need for consumers to stay vigilant, especially with offers that seem unusually generous or come from unverified sources.
McAfee threat researchers have identified a surge in counterfeit sites and phishing scams that use popular luxury brands and tech products to lure consumers into “deals” on fake e-commerce sites designed to appear as official brand pages. While footwear and handbags were identified as the top two product categories exploited by cybercrooks during this festive time, the list of most exploited brands extends beyond those categories:
By mimicking trusted brands like these, offering unbelievable deals, or posing as legitimate customer service channels, cybercrooks create convincing traps designed to steal personal information or money. Here are some of the most common tactics scammers are using this holiday season:
With holiday shopping in full swing, it’s essential for consumers to stay one step ahead of scammers. By understanding the tactics cybercriminals use and taking a few precautionary measures, shoppers can protect themselves from falling victim to fraud. Here are some practical tips for safe shopping this season:
McAfee’s threat research team analyzed malicious or suspicious URLs that McAfee’s web reputation technology identified as targeting customers, by using a list of key company and product brand names—based on insights from a Potter Clarkson report on frequently faked brands—to query the URLs. This methodology captures instances where users either clicked on or were directed to dangerous sites mimicking trusted brands. Additionally, the team queried anonymized user activity from October 1st through November 12th.
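To picture how that kind of brand-name query works, here’s a much-simplified sketch in Python. It is not McAfee’s actual pipeline; the brand keywords and example URLs are illustrative placeholders standing in for the real data set.

```python
# Simplified sketch of brand-keyword matching over a list of suspicious URLs.
# Not McAfee's actual pipeline; brands and URLs below are placeholders.
from urllib.parse import urlparse

BRAND_KEYWORDS = ["apple", "yeezy", "adidas", "nike", "louis vuitton", "rolex"]

def brands_in_url(url: str) -> list[str]:
    """Return the brand keywords that appear in a URL's hostname or path."""
    parsed = urlparse(url.lower())
    haystack = ((parsed.hostname or "") + parsed.path).replace("-", "")
    return [b for b in BRAND_KEYWORDS if b.replace(" ", "") in haystack]

suspicious_urls = [
    "https://apple-support-billing.example/verify",
    "https://cheap-yeezy-outlet.example/sale",
]
for url in suspicious_urls:
    hits = brands_in_url(url)
    if hits:
        print(f"{url} -> brand names found: {hits}")
```

In practice, matching like this is only a first pass; reputation scoring and page analysis decide whether a flagged URL is actually malicious.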
The image below is a screenshot of a fake / malicious / scam site: Yeezy, a popular product brand formerly associated with Adidas, appeared in multiple malicious or suspicious URLs. These sites often present themselves as official Yeezy and/or Adidas shopping sites.
The image below is a screenshot of a fake / malicious / scam site: The Apple brand was a popular target for scammers. Many sites were either knock offs, scams, or in this case, a fake customer service page designed to lure users into a scam.
The image below is a screenshot of a fake / malicious / scam site: This particular (fake) Apple sales site used Apple within its URL and name to appear more official. Oddly, this site also sells Samsung Android phones.
The image below is a screenshot of a fake / malicious / scam site: This site, now taken down, is a scam site purporting to sell Nike shoes.
The image below is a screenshot of a fake / malicious / scam site: Louis Vuitton is a popular brand for counterfeits and scams, particularly its handbags. Here is one site that focused entirely on Louis Vuitton handbags.
The image below is a screenshot of a fake / malicious / scam site: This site presents itself as the official Louis Vuitton site selling handbags and clothes.
The image below is a screenshot of a fake / malicious / scam site: This site offers too-good-to-be-true deals on branded items, including this Louis Vuitton bomber jacket.
The image below is a screenshot of a fake / malicious / scam site: Rolex is a popular watch brand for counterfeits and scams. This site acknowledges that it sells counterfeits, yet makes no effort to indicate this on the product listings themselves.
The post This Holiday Season, Watch Out for These Cyber-Grinch Tricks Used to Scam Holiday Shoppers appeared first on McAfee Blog.
Two-step verification, two-factor authentication, multi-factor authentication…whatever your social media platform calls it, it’s an excellent way to protect your accounts.
There’s a good chance you’re already using multi-factor verification with your other accounts — for your bank, your finances, your credit card, and any number of things. The way it requires an extra one-time code in addition to your login and password makes life far tougher for hackers.
It’s increasingly common nowadays for all manner of online services to allow access to your accounts only after you’ve provided a one-time passcode sent to your email or smartphone. That’s two-step verification at work. You get sent a code as part of your usual login process (usually a six-digit number), and then you enter it along with your username and password.
Some online services also offer the option to use an authenticator app, which generates the code in a secure app on your device rather than sending it via email or text. Authenticator apps work in much the same way, yet they offer three unique features:
Google, Microsoft, and others offer authenticator apps if you want to go that route. You can get a good list of options by checking out the “editor’s picks” at your app store or in trusted tech publications.
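If you’re curious what’s happening under the hood, the sketch below uses the open-source pyotp library to show how a time-based one-time password (TOTP) works: the service and your authenticator app share a secret once, and both then compute the same six-digit code from that secret and the current time. This is a generic illustration, not the setup flow of any particular service.

```python
# A minimal sketch of how time-based one-time passwords (TOTP) work,
# using the open-source pyotp library (pip install pyotp). This mirrors
# what an authenticator app does; it is not tied to any specific service.
import pyotp

# The service and your app share this secret once, typically via a QR code.
shared_secret = pyotp.random_base32()
totp = pyotp.TOTP(shared_secret)

code = totp.now()              # the six-digit code your app displays
print("Current code:", code)

# The service runs the same math on its copy of the secret to verify.
print("Valid right now?", totp.verify(code))
```

Because the code changes every 30 seconds and the secret never leaves your device, a stolen password alone isn’t enough to get in.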
Whichever form of authentication you use, always keep that secure code to yourself. It’s yours and yours alone. Anyone who asks for that code, say someone masquerading as a customer service rep, is trying to scam you. With that code, and your username/password combo, they can get into your account.
Passwords and two-step verification work hand-in-hand to keep you safer. Yet not any old password will do. You’ll want a strong, unique password. Here’s how that breaks down:
Now, with strong passwords in place, you can get to setting up multi-factor verification on your social media accounts.
When you set up two-factor authentication on Facebook, you’ll be asked to choose one of three security methods:
And here’s a link to the company’s full walkthrough: https://www.facebook.com/help/148233965247823
When you set up two-factor authentication on Instagram, you’ll be asked to choose one of three security methods: an authentication app, text message, or WhatsApp.
And here’s a link to the company’s full walkthrough: https://help.instagram.com/566810106808145
And here’s a link to the company’s full walkthrough: https://faq.whatsapp.com/1920866721452534
And here’s a link to the company’s full walkthrough: https://support.google.com/accounts/answer/185839?hl=en&co=GENIE.Platform%3DDesktop
1. Tap Profile at the bottom of the screen.
2. Tap the Menu button at the top.
3. Tap Settings and Privacy, then Security.
4. Tap 2-step verification and choose at least two verification methods: SMS (text), email, and authenticator app.
5. Tap Turn on to confirm.
And here’s a link to the company’s full walkthrough: https://support.tiktok.com/en/account-and-privacy/personalized-ads-and-data/how-your-phone-number-is-used-on-tiktok
The post How to Protect Your Social Media Passwords with Multi-factor Verification appeared first on McAfee Blog.
What is a botnet? And what does it have to do with a toaster?
We’ll get to that. First, a definition:
A botnet is a group of internet-connected devices that bad actors hijack with malware. Using remote controls, bad actors can harness the power of the network to perform several types of attacks. These include distributed denial-of-service (DDoS) attacks that shut down internet services, breaking into other networks to steal data, and sending massive volumes of spam.
In a way, the metaphor of an “army of devices” leveling a cyberattack works well. With thousands or even millions of compromised devices working in concert, bad actors can do plenty of harm. As we’ll see in a moment, they’ve done their share already.
Which brings us back to that toaster.
The pop-up toaster as we know it first hit the shelves in 1926, under the brand name “Toastmaster.”[i] With a familiar springy *pop*, it has ejected toast just the way we like it for nearly a century. Given that its design was so simple and effective, it’s remained largely unchanged. Until now. Thanks to the internet and so-called “smart home” devices.
Toasters, among other things, are getting connected, and they have been for a few years now. The number of connected Internet of Things (IoT) devices now reaches well into the billions worldwide — which includes smart home devices.[ii]
Businesses use IoT devices to track shipments and various aspects of their supply chain. Cities use them to manage traffic flow and monitor energy use. (Does your home have a smart electric meter?) And for people like us, we use them to play music on smart speakers, see who’s at the front door with smart doorbells, and order groceries from an LCD screen on our smart refrigerators — just to name a few ways we’ve welcomed smart home devices into our households.
In the U.S. alone, smart home devices make up a $30-plus billion marketplace per year.[iii] However, it’s still a relatively young marketplace. And with that comes several security issues.
First and foremost, many of these devices still lack sophisticated security measures, which makes them easy pickings for cybercriminals. Why would a cybercriminal target that smart lightbulb in your living room reading lamp? Because networks are only as secure as their least secure device. If a cybercriminal can compromise that smart lightbulb, it can potentially give them access to the entire home network it’s on — along with all the other devices and data on it.
More commonly, though, hackers target smart home devices for another reason. They conscript them into botnets. It’s a highly automated affair. Hackers use bots to add devices to their networks. They scan the internet in search of vulnerable devices and use brute-force password attacks to take control of them.
At issue: many of these devices ship with factory usernames and passwords. Fed with that info, a hacker’s bot can have a relatively good success rate because people often leave the factory password unchanged. It’s an easy in.
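That’s also why changing factory credentials matters so much. As a rough illustration (not any vendor’s real list, and not a McAfee tool), the sketch below checks a username/password pair against a handful of widely published defaults, the same kind of list a hacker’s bot works from:

```python
# Illustrative sketch: a quick self-check of your own device credentials
# against a handful of widely published factory defaults. The list here
# is a tiny, hypothetical sample; real default-credential lists posted
# online run into the thousands of entries.
COMMON_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("admin", "1234"),
    ("root", "root"),
    ("user", "user"),
}

def is_factory_default(username: str, password: str) -> bool:
    """Return True if the username/password pair is a well-known default."""
    return (username.lower(), password.lower()) in COMMON_DEFAULTS

if is_factory_default("admin", "1234"):
    print("Change these credentials -- a bot will guess them in seconds.")
```

If a check like this would flag your device’s login, change it before a bot finds it first.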
Results from one real-life test show just how active these hacker bots are:
We created a fake smart home and set up a range of real consumer devices, from televisions to thermostats to smart security systems and even a smart kettle – and hooked it up to the internet.
What happened next was a deluge of attempts by cybercriminals and other unknown actors to break into our devices, at one stage, reaching 14 hacking attempts every single hour.
Put another way, that hourly rate added up to more than 12,000 unique scans and attack attempts a week.[iv] Imagine all that activity pinging your smart home devices.
Now, with a botnet in place, hackers can wage the kinds of attacks we mentioned above, particularly DDoS attacks. DDoS attacks can shut down websites, disrupt service and even choke traffic across broad swathes of the internet.
Remember the “Mirai” botnet attack of 2016, where hackers targeted a major provider of internet infrastructure?[v] It ended up crippling traffic in concentrated areas across the U.S., including the northeast, Great Lakes, south-central, and western regions. Millions of internet users were affected: people, businesses, and government workers alike.
Other, more recent headline-makers are the December 2023 and July 2024 attacks on Amazon Web Services (AWS).[vi],[vii] AWS provides cloud computing services to millions of businesses and organizations, large and small. Those customers saw slowdowns and disruptions for three days, which in turn slowed down and disrupted the people and services that wanted to connect with them.
In July 2024, Microsoft likewise fell victim to a DDoS attack. It affected everything from Outlook email and Azure web services to Microsoft Office and online games of Minecraft. They all got swept up in it.[viii]
These attacks stand out as high-profile DDoS attacks, yet smaller botnet attacks abound, ones that don’t make headlines. They can disrupt the operations of websites, public infrastructure, and businesses, not to mention the well-being of people who rely on the internet.
Earlier we mentioned the problem of unchanged factory usernames and passwords. These include everything from “admin123” to the product’s name. Easy to remember, and highly insecure. The practice is so common that they get posted in bulk on hacking websites, making it easy for cybercriminals to simply look up the type of device they want to attack.
Complicating security yet further is the fact that some IoT and smart home device manufacturers introduce flaws in their design, protocols, and code that make them susceptible to attacks.[ix] The thought gets yet more unsettling when you consider that some of the flaws were found in things like smart door locks.
The ease with which IoT devices can be compromised is a big problem. The solution, however, starts with manufacturers that develop IoT devices with security in mind. These devices need to ship with the ability to accept security updates and with strong security measures embedded from the get-go.
Until industry standards get established to ensure such basic security, a portion of securing your IoT and smart home devices falls on us, as people and consumers.
As for security, you can take steps that can help keep you safer. Broadly speaking, they involve two things: protecting your devices and protecting the network they’re on. These security measures will look familiar, as they follow many of the same measures you can take to protect your computers, tablets, and phones.
Grab online protection for your smartphone.
Many smart home devices use a smartphone as a sort of remote control, not to mention as a place for gathering, storing, and sharing data. So whether you’re an Android owner or iOS owner, use online protection software on your phone to help keep it safe from compromise and attack.
Don’t use the default — Set a strong, unique password.
One issue with many IoT devices is that they often come with a default username and password. This could mean that your device and thousands of others just like it all share the same credentials, which makes it painfully easy for a hacker to gain access to them because those default usernames and passwords are often published online. When you purchase any IoT device, set a fresh password using a strong method of password creation, such as ours. Likewise, create an entirely new username for additional protection as well.
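If you’d like to see what strong, random password creation looks like in practice, here’s a minimal sketch using Python’s built-in secrets module. The length and character set are illustrative choices; a password manager accomplishes the same thing with less effort.

```python
# A minimal sketch of strong password generation using Python's built-in
# secrets module. The length and character set here are illustrative.
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different output every run
```

A 16-character password drawn from letters, digits, and punctuation gives a brute-force bot an astronomically large space to search, which is exactly what a factory default doesn’t.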
Use multi-factor authentication.
Online banks, shops, and other services commonly offer multi-factor authentication to help protect your accounts — with the typical combination of your username, password, and a security code sent to another device you own (often a mobile phone). If your IoT device supports multi-factor authentication, consider using it there too. It throws a big barrier in the way of hackers who simply try and force their way into your device with a password/username combination.
Secure your internet router too.
Another device that needs good password protection is your internet router. Make sure you use a strong and unique password as well to help prevent hackers from breaking into your home network. Also, consider changing the name of your home network so that it doesn’t personally identify you. Fun alternatives to using your name or address include everything from movie lines like “May the Wi-Fi be with you” to old sitcom references like “Central Perk.” Also check that your router is using an encryption method, like WPA2 or the newer WPA3, which keeps your signal secure.
Upgrade to a newer internet router.
Older routers might have outdated security measures, which might make them more prone to attacks. If you’re renting yours from your internet provider, contact them for an upgrade. If you’re using your own, visit a reputable news or review site such as Consumer Reports for a list of the best routers that combine speed, capacity, and security.
Update your apps and devices regularly.
In addition to fixing the odd bug or adding the occasional new feature, updates often fix security gaps. Out-of-date apps and devices might have flaws that hackers can exploit, so regular updating is a must from a security standpoint. If you can set your smart home apps and devices to receive automatic updates, that’s even better.
Set up a guest network specifically for your IoT devices.
Just as you can offer your guests secure access that’s separate from your own devices, creating an additional network on your router allows you to keep your computers and smartphones separate from IoT devices. This way, if an IoT device is compromised, a hacker will still have difficulty accessing your other devices on your primary network, the one where you connect your computers and smartphones.
Shop smart.
Read trusted reviews and look up the manufacturer’s track record online. Have their devices been compromised in the past? Do they provide regular updates for their devices to ensure ongoing security? What kind of security features do they offer? And privacy features too? Resources like Consumer Reports can provide extensive and unbiased information that can help you make a sound purchasing decision.
As more and more connected devices make their way into our homes, the need to ensure that they’re secure only increases. More devices mean more potential avenues of attack, and your home network is only as secure as the least secure device that’s on it.
While standards put forward by industry groups such as UL and Matter have started to take root, a good portion of keeping IoT and smart home devices secure falls on us as consumers. Taking the steps above can help prevent your connected toaster from playing its part in a botnet army attack — and it can also protect your network and your home from getting hacked.
It’s no surprise that IoT and smart home devices have raked in billions of dollars over the years. They introduce conveniences and little touches into our homes that make life more comfortable and enjoyable. However, they’re still connected devices. And like anything that’s connected, they must be protected.
[i] https://www.hagley.org/librarynews/history-making-toast
[ii] https://www.statista.com/statistics/1183457/iot-connected-devices-worldwide/
[iii] https://www.statista.com/outlook/dmo/smart-home/united-states
[iv] https://www.which.co.uk/news/article/how-the-smart-home-could-be-at-risk-from-hackers-akeR18s9eBHU
[v] https://en.wikipedia.org/wiki/Mirai_(malware)
[vi] https://www.darkreading.com/cloud-security/eight-hour-ddos-attack-struck-aws-customers
[vii] https://www.forbes.com/sites/emilsayegh/2024/07/31/microsoft-and-aws-outages-a-wake-up-call-for-cloud-dependency/
[viii] https://www.bbc.com/news/articles/c903e793w74o
[ix] https://news.fit.edu/academics-research/apps-for-popular-smart-home-devices-contain-security-flaws-new-research-finds/
The post What Is a Botnet? appeared first on McAfee Blog.
As we honor Veterans Day, it’s crucial to recognize not only the sacrifices made by those who served but also the unique cybersecurity challenges they face in today’s digital age. Veterans, with their deep ties to sensitive military information and benefits, are increasingly being targeted by cybercriminals seeking to exploit their personal data. Seven in 10 military vets and active-duty service members have been victims of at least one digital crime.
From phishing scams impersonating official VA communications to the risk of military identity theft, veterans encounter specific threats that require tailored cybersecurity awareness and precautions. By taking proactive steps, veterans can implement strong security practices to better protect their identities and enjoy a safer online experience.
Veterans possess a wealth of sensitive information tied to their military service. This includes not only Social Security numbers, medical records, and details about deployments and benefits, but also personal histories that can include addresses, family information, and even details about combat experiences. Such comprehensive information is highly valuable to cybercriminals for various malicious activities, including identity theft and financial fraud.
Cybercriminals can exploit this data to impersonate veterans, gain unauthorized access to financial accounts, file false claims for VA benefits, or sell the information on the dark web. The repercussions of such breaches extend beyond financial loss, impacting veterans’ reputations, access to essential services, and overall peace of mind. Safeguarding this sensitive data is critical to ensuring veterans’ security and well-being in the digital age.
One of the primary threats that veterans encounter is phishing scams. These scams often impersonate official communications from the Department of Veterans Affairs (VA) or other military organizations. Cybercriminals use deceptive emails, text messages, or phone calls to trick veterans into revealing personal information or clicking malicious links that can compromise their devices.
Another prevalent danger is military identity theft, where criminals use stolen or fabricated military credentials to access benefits, obtain loans, or commit fraud in the veteran’s name. This type of identity theft can be particularly devastating, affecting not only financial stability but also the veteran’s reputation and access to crucial services.
In 2023, military consumers filed more than 93,000 fraud complaints, with imposter scams alone accounting for 42,766 cases, resulting in reported losses exceeding $178 million. To combat these threats, veterans must be equipped with robust cybersecurity awareness and practices:
If you think you have been the victim of identity theft, immediately take steps to protect yourself and your family:
As veterans continue to navigate the complexities of modern life, safeguarding their personal information online is paramount. By staying informed about cybersecurity best practices and leveraging available resources, veterans can significantly reduce their risk of falling victim to cyber threats.
The post Safeguarding Those Who Served: Cybersecurity Challenges for Veterans appeared first on McAfee Blog.
As Black Friday approaches, eager bargain hunters are gearing up to snag the best deals online. But with the excitement of holiday shopping also comes the risk of cyber threats, as cybercriminals see this busy time as an opportunity to exploit unsuspecting shoppers. Here’s what you need to know to protect yourself from potential risks while scoring your favorite holiday deals.
Authorities are already sounding the alarm about the risks associated with online shopping during the festive season. Cybersecurity agencies, including the UK’s National Cyber Security Centre (NCSC) and Canada’s Royal Canadian Mounted Police (RCMP), have warned that cybercriminals are using increasingly sophisticated tactics, including leveraging AI to create more convincing scams, malicious ads, and spoofed websites. In the United States, the FBI and Cybersecurity and Infrastructure Security Agency (CISA) have issued advisories to stay vigilant against ransomware attacks during holiday periods when many businesses operate with minimal staff. Cybercriminals take advantage of widely celebrated holidays like Black Friday to launch impactful attacks.
Modern AI tools have made it easier for scammers to create:
During the bustling shopping period that spans Thanksgiving, Black Friday, Small Business Saturday, and Cyber Monday, online sales hit record highs, and cybercriminals follow the money trail. Here are some of the most common scams to watch out for and ways to protect yourself.
Phishing attacks often involve fake emails or social media messages that mimic legitimate promotional offers or shipping notifications. These messages are designed to trick you into revealing sensitive information, such as credit card details, or to download malware onto your device. Common tactics include sending fake order confirmations or gift card scams, which pressure recipients to act quickly by purchasing gift cards to resolve a fabricated issue.
Fake websites that imitate popular online retailers pop up frequently during the Black Friday shopping season. These sites may look identical to the real thing, but their sole purpose is to steal your payment information.
Malicious advertisements can infiltrate legitimate websites, leading you to infected sites that install malware on your device. E-skimming occurs when hackers insert malicious code into payment pages on legitimate eCommerce sites, stealing your credit card information during checkout.
During the busy holiday season, identity theft and credit card fraud rise sharply. Cybercriminals use stolen personal information to make fraudulent purchases or open accounts in your name.
Here are some extra tips to keep your online shopping secure during the holiday season:
While Black Friday is a fantastic time to grab deals, it’s also a time to be extra cautious. By understanding common threats and following these safety tips, you can enjoy your holiday shopping while minimizing the risks. Remember, if a deal seems too good to be true, it probably is. Legitimate retailers won’t pressure you into quick decisions or require unusual payment methods. Take your time, verify offers, and trust your instincts.
The best defense against AI scams is a careful, methodical approach to holiday shopping. Create a budget, make a list of what you want to buy, and stick to trusted retailers. A missed deal is better than falling victim to a scam.
The post How To Protect Yourself from Black Friday and Cyber Monday AI Scams appeared first on McAfee Blog.
As malicious deepfakes continue to flood our screens with disinformation during this election year, we’ve released our 2024 Election AI Toolkit to help voters protect themselves and their vote.
Our own research reveals just how deep the problem runs. More than six in ten Americans (63%) said they’ve seen a deepfake in the past 60 days. As for the impact of those deepfakes, nearly half (48%) of those who’ve seen one said it’s influenced who they’ll vote for in the upcoming election.
In all, we found that 91% of Americans said they’re concerned that AI-generated disinformation could interfere with public perception of candidates, their platforms, or even election results.
Disinformation has played a long and shady role in politics. George Washington fell victim to it in 1777, when forged letters painted him as a British sympathizer — disinformation that followed him into his presidency.[i]
And it’s appeared on the internet for some time too. For years, creating disinformation online called for plenty of manual labor. Writers, designers, and developers put hours into writing copy, creating images, and building sites to spread it. Now it takes just one person mere minutes. The advent of cheap and free AI tools has put disinformation into overdrive.
We’ve seen an explosive rise in malicious deepfakes in the run-up to Election Day.
With polling in some states already underway, we can expect the glut of malicious deepfakes to continue. They might:
With that, it’s little surprise that nearly 60% of Americans say that they’re extremely or very concerned about AI’s influence on the election.[vi] Deepfakes have simply become pervasive.
AI has given new life to the old problem of disinformation and fake news. In many ways, it’s supercharged it.
It’s done so in two primary ways:
In all, it’s easier, cheaper, and quicker than ever to create malicious deepfakes with AI tools. On top of that, the image and sound quality of deepfakes continues to improve, making it tougher than ever to tell the difference between what’s real and what’s fake.
Taken together, this has put voters in a lurch. Who and what can they trust online?
Even as the creators of malicious AI-generated content have gotten cagier in their ways, their work still gives off signs of a fake. However, spotting this malicious content calls for extra effort on everyone’s part when getting their news or scrolling their feeds online. That means scrutinizing what we consume and relying on trusted fact-checking resources to get at the truth. It also means using AI as an ally, with AI tools that detect AI deepfakes in real time.
Our Election Year Toolkit will help you do just that. It covers the basics of fake news and malicious AI deepfakes, how to spot them, and more. As you’ll see, it’s a topic both broad and deep, and we explore it in a step-by-step way that helps make sense of it all for voters.
Sharing info about AI with voters is one of several steps we’ve taken to fight against malicious deepfakes.
In a first-of-its-kind collaboration, we’ve teamed up with Yahoo News to bolster the credibility of images on the Yahoo News platform. This collaboration integrates McAfee’s sophisticated deepfake image detection technology into Yahoo News’s content quality system, offering readers an added layer of trust.
And we’re rolling out our McAfee Deepfake Detector through our partners too. It checks audio being played through your browser to figure out if the content you’re watching or listening to contains AI-generated audio. When AI audio is detected, users are notified in seconds.
AI makes disinformation look and sound far more credible than ever. And bad actors can produce it on a tremendous scale, thanks to the ease and speed of AI tools. In an election year, that calls for more scrutiny on our collective part — and our 2024 Election AI Toolkit can help. It covers how to spot a deepfake, how they spread, and several fact-checking resources that you can rely on when that bit of news you stumble across seems a little sketchy.
Download the full McAfee AI Election Toolkit here
[i] https://www.politifact.com/article/2022/feb/21/when-george-washington-fought-misinformation/
[v] https://techcrunch.com/2024/03/06/political-deepfakes-are-spreading-like-wildfire-thanks-to-genai/
The post How To Survive the Deepfake Election with McAfee’s 2024 Election AI Toolkit appeared first on McAfee Blog.
Thinking about deleting your TikTok account? We can show you how.
Before we get to that, you might be interested to find what kind of data TikTok collects about you — and how long TikTok keeps your account data, even after you delete it.
For that, we turn to TikTok’s privacy policy page.[i] TikTok collects data just like practically any other social media platform, and the list of what they collect runs long. You can see a full list in their privacy policy, yet here are a few things you might want to know about. Per TikTok:
So, TikTok knows the content you create, the content you appear in, and the messages you send (and the specific contents of those messages) — and potentially payment info and the people in your phone contacts. Additionally, it collects info on you from other sources and on any purchases you might have made through the platform.
The list continues. Once again, you can visit their privacy policy page for more details, yet here’s a partial rundown of other data they collect about you automatically. Per TikTok:
As for how long they keep all that data and info they collect, the answer is unclear. Per TikTok,
“We retain information for as long as necessary to provide the Platform and for the other purposes set out in this Privacy Policy. We also retain information when necessary to comply with contractual and legal obligations, when we have a legitimate business interest to do so (such as improving and developing the Platform and enhancing its safety, security, and stability), and for the exercise or defense of legal claims.” [ii]
The key phrases here are “as long as necessary” and “when necessary.” TikTok doesn’t set a specific period in its policy. In fact, TikTok goes on to say that the periods vary based on “different criteria, such as the type of information and the purposes for which we use the information.”
Now, onto the steps for deleting your TikTok account.
Note that TikTok provides a 30-day grace period once you delete your account. If you want to hop back onto the platform, you can simply reactivate your account during that period. All your info, data, and posts will be there. After those 30 days, you’ll no longer have access to them.
We suggest one more step in addition to the ones above.
Here’s why you might want to do that … Given the way social media companies share info with third parties, there’s a chance your personal info might have made it onto one or several data broker sites. These sites buy and sell extensive lists of personal info to practically anyone, ranging from advertisers to spammers and scammers.
If the thought of your personal info being bought and sold puts you off, there’s something you can do about it. Our Personal Data Cleanup service can scan some of the riskiest data broker sites and show you which ones are selling your personal info. It also provides guidance on how you can remove your data from those sites, and with select products, it can even manage the removal for you.
[i] https://www.tiktok.com/legal/page/row/privacy-policy/en
[ii] https://www.tiktok.com/legal/page/row/privacy-policy/en
The post How to Delete Your TikTok Account appeared first on McAfee Blog.
What is oversharing on social media? And how do you avoid it?
Oversharing on social media takes on a couple different aspects. There’s one that’s personal, like what you share and how often you share it. Another revolves around your privacy and your security. Namely, how does what you share and how often you share it affect your privacy — and what further effect does that have on your security? Does it open you up to scams, identity theft, and other forms of cybercrime?
A grasp on that can help you avoid oversharing and post on social media in a way that’s “just right.”
Granted, it might seem a little odd to talk about privacy and the like on social media, which is, by definition, social in nature. The idea, though, is striking a balance — getting all the benefits of connection and keeping up with people and groups that matter to you in a way that’s enjoyable and safe. And healthy too.
Let’s start with a look at what oversharing looks like and its possible effects. From there, we can check out some specific ways you can avoid oversharing on social media.
For starters, oversharing usually conjures up the notion of T.M.I., or “too much information.” That might involve posting too often, yet it can also involve sharing too many personal details. Along those lines, a long-standing definition of oversharing goes like this:
“The excessive generosity with information about one’s private life or the private lives of others.”[i]
Of course, “excessive” is a relative term. Different people have different boundaries when it comes to what’s personal. Likewise, the people reading a post have different ideas of what counts as sharing “too much” and what doesn’t.
Further complicating the matter is how many people choose to have multiple accounts on the same platform.
In particular, teens and younger adults often have a broader public account with many followers along with a more private account that they share with select friends. A post that might be fine, and expected, on a private account might come across as an overshare on a public account.
However, there are cases where oversharing can point to deeper issues, like anxiety, depression, and unhealthy attention-seeking behavior. So-called “sadfishing” offers one example, where people create negative posts in a bid to get sympathy. Other examples include sharing details about oneself online that a person would normally never share on a phone call or in a face-to-face conversation.
If you have concerns about yourself or someone you know, confide in someone you trust for advice. See if they have the same concerns as you do. Also, in the U.S., you can speak to a licensed counselor through the “988” service, which you can learn more about at https://988lifeline.org. It’s free and confidential.
When it comes to privacy and security, oversharing takes on a different meaning. Elsewhere in our blogs, we’ve talked about that issue like this:
“Saying more than you should to more people than you should.”
Now, here’s where your privacy and security come in. Consider the audience you have across your social media profiles. Perhaps you have dozens, if not hundreds of friends and followers. All with various degrees of closeness and familiarity. Post something personal on social media to that broad audience, and you indeed might end up sharing something that puts your personal privacy and security at risk. After all, if you have hundreds of followers, how many of them are people you truly know and absolutely trust?
Here are a few scenarios:
In other words, social media posts have a way of saying much more than we might think. And when shared publicly or to a large audience of friends and followers you don’t know well, that can expose you in ways you might not want.
As with so many things online, staying safer and more private calls for a mix of technology and internet street smarts. Things like settings, privacy tools, and what you post can help you enjoy social media safely.
Be more selective with your settings.
Social media platforms like Facebook, Instagram, and others give you the choice of making your profile and posts visible to friends only. Choosing this setting keeps the broader internet from seeing what you’re doing, saying, and posting — not to mention your relationships and likes. (Think of your social media profile showing up in a Google search.) Taking a “friends only” approach to your social media profiles can help protect your privacy because that gives a possible scammer or stalker much less material to work with.
Some platforms further allow you to create sub-groups of friends and followers. With a quick review of your network, you can create a sub-group of your most trusted friends and restrict your posts to them as needed.
Stay on top of your privacy with our Social Privacy Manager.
Here’s the thing with those social media settings — they can be challenging to locate and confusing to adjust. In all, it can take time to make sure that your info and posts are only shown to people you want to see them. Our Social Privacy Manager can do that work for you.
Based on your preferences, it adjusts more than 100 privacy settings across your social media accounts in just a few clicks. This way, your personal info is only visible to the people you want to share it with.
Say “no” to bots and bogus accounts.
There are plenty of fake accounts out there on social media. On Facebook, the platform acted on 1.2 billion fake accounts between April and June 2024 alone.[iii] On X, formerly Twitter, the platform announced a “bot purge” in 2024. However, in May 2023, the platform suspended access to a publicly available data set that helped find and track bots on the platform. Still, researchers continue to find false accounts, particularly ones powered by AI tools.[iv]
The bottom line is this: don’t accept invites from people you don’t know. Bad actors might use them to launch scams, gather personal info on potential identity theft victims, and spread disinformation. Also, be aware that some followers might not be who they appear to be. In the immediate wake of the “bot purge” on X, many accounts saw themselves losing thousands of followers.[v]
Consider what you post.
Think about posting those vacation pictures after you get back home, so people don’t know you’re away while you’re still gone. Also, consider if your post pinpoints where you are or where you go regularly. Do you want people in your broader network to know that? Closely review the pics you take and see if there’s any revealing information in the background. If so, you can crop it out (think notes on a whiteboard, reflections in a window, or revealing location info). Further, ask anyone you want to include in your post for their permission. In all, consider their privacy too.
Consider what you post about others, too.
Indeed, oversharing can include what you post and say about others online as well. A good rule of thumb when posting group pictures online is to ask if the other people in them are okay with it going onto social media. Also ask yourself, “Is this my news to share?” For example, a friend leaves one job to take on a new role elsewhere. Before posting, “Congrats on the new job!” let them make that first announcement themselves.
For parents, this calls for extra consideration too. Anything you post about your child becomes a part of their permanent online record. What might seem funny or cute today might become embarrassing or even fodder for cyberbullies tomorrow.
Yes, you give up some privacy by using social media. That’s the very nature of it. The trick is in sharing just enough and with just the right people.
Being careful of who you accept as a friend, keeping an eye on accounts that follow you, and paying mind to what you post and how often are all ways you can prevent oversharing. Likewise, using tools to fine-tune who sees your posts, keeping things to close friends in sub-groups or secondary accounts, and keeping your social media accounts out of the public eye are yet more steps you can take to protect yourself, your privacy, and your security on social media.
[i] https://portal.research.lu.se/en/publications/front-and-backstage-in-social-media
[ii] https://www.theguardian.com/world/2019/oct/11/japanese-assault-suspect-tracked-down-pop-star-via-eye-reflection-in-selfie
[iii] https://transparency.meta.com/reports/community-standards-enforcement/fake-accounts/facebook
[iv] https://arxiv.org/pdf/2307.16336
[v] https://www.socialmediatoday.com/news/x-formerly-twitter-bot-purge-sees-big-accounts-lose-followers/712495/
The post How to Avoid Oversharing on Social Media appeared first on McAfee Blog.
What is malware? A dictionary-like definition is “malicious software that attacks computers, smartphones, and other connected devices.”
In fact, “malware” is a mash-up of “malicious software.” It describes any type of software or code specifically designed to exploit a connected device or network without consent. And, unsurprisingly, hackers design most of it for financial gain.
Think of malware as an umbrella term that covers an entire host of “bad stuff,” such as:
Spyware that tracks activity, like what you type and where you type it. (Think snooping on your bank account logins.)
Ransomware that holds devices or the data on them hostage, which hackers only release for a price. (And even so, payment is no guarantee you’ll get back your access.)
Adware that serves up spammy ads on your device. (The hacker gets paid for the number of “impressions” the ads have. The more they show up on people’s devices, the more they get paid.)
Botnet software that hijacks a device into a remote-controlled network of other compromised devices. (These networks can be used to knock websites offline or even shut down large portions of the internet, to mention just two of the things they can do.)
Rootkits that give hackers remote-control access to a device. (And with that control, they can wage all manner of attacks — on the device and on other devices too.)
Viruses that modify the way a device and its apps function. Also, they can effectively bring a device or network to a grinding halt. (Yes, viruses are a subset of malware. They can copy, delete, and steal data, among other things.)
You might know malware by its more commonly used name — viruses.
There’s a pretty good reason why people commonly refer to malware as a “virus.” Viruses have been on our collective minds for some time.
Viruses have a long history. You could call the virus “the original malware.” And depending on how you define what a virus is, the first one took root in 1971 — more than 50 years ago. It was known as Creeper, and rather than being malicious in nature, its creator designed it to show how a self-replicating program could spot other devices on a network, transfer itself to them, and find yet more devices to repeat the process. Later, the same programmer who created a refined version of Creeper developed Reaper, a program that could remove the Creeper program. In a way, Reaper could be considered the first piece of antivirus software.[i]
From there, it wasn’t until the 1980s that malware started affecting the broader population, a time when computers became more commonplace in businesses and people’s homes.
At first, malware typically spread by infected floppy disks, much like the “Brain” virus in 1986. While recognized today as the first large-scale computer virus, its authors say they never intended it to work that way. Rather, they say they created Brain as an anti-piracy measure to protect their proprietary software from theft. However, Brain got loose. It went beyond their software and affected computers worldwide. Although not malicious or destructive in nature, Brain most certainly put the industry, businesses, and consumers on notice. Computer viruses were a thing.[ii]
Another piece of malware that got passed along via floppy disks was the “PC Cyborg” attack that targeted the medical research community in and around 1989. There, the malware would lie in wait until the user rebooted their computer for the 90th time, at which point it presented them with a digital ransom note.[iii]
An early example of ransomware – Source: Wikipedia
Upon that 90th boot, PC Cyborg encrypted the computer’s files, which would only get unencrypted if the victim paid a fee, making it the first documented form of ransomware.
Shortly thereafter, the internet started connecting computers, which opened millions of doors for hackers as people went online. Among the most noteworthy was 1999’s “Melissa” virus, which spread by way of infected email attachments and overloaded hundreds of corporate and governmental email servers worldwide.
It was quickly followed in 2000 by what’s considered among the most damaging malware to date — ILOVEYOU, which also spread by way of an attachment, this one posing as a love letter. Specifically, it was a self-replicating worm that installed itself on the victim’s computer where it destroyed some info and stole other info, then spread to other computers. One estimate put the global cost of ILOVEYOU at $10 billion. It further speculated that it infected 10% of the world’s internet-connected computers at the time.[iv]
With that history, it’s no surprise that anti-malware software is commonly called “antivirus.”
Antivirus forms a major cornerstone of online protection software. It protects your devices against malware through a combination of prevention, detection, and removal. Our antivirus uses AI to detect the absolute latest threats — and has for several years now.
Today, McAfee registers more than a million new malicious programs and potentially unwanted apps (PUA) each day, which contributes to the millions and millions already in existence. Now with the arrival of AI-powered coding tools, hackers can create new strains at rates unseen before.
That’s another reason why we use AI in our antivirus software. We use AI to protect against AI-created malware.
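To give a rough sense of how machine learning can help here (this is a simplified, illustrative sketch, not McAfee’s actual detection engine), a scanner might boil a file down to a few numeric features and ask a trained model how suspicious it looks:

# Illustrative sketch only. Assumes a generic machine-learning approach:
# turn a file into numeric features, then score it with a classifier
# trained on known-good and known-bad samples.
import math
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def extract_features(data: bytes) -> list:
    """Reduce raw file bytes to a few simple numeric features."""
    counts = Counter(data)
    total = len(data) or 1
    # Shannon entropy: packed or encrypted payloads tend to score high.
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return [entropy, total, data.count(b"http"), data.count(b"\x00") / total]

# Toy training set: feature vectors labeled 0 (benign) or 1 (malicious).
X_train = [extract_features(b"hello world " * 50),
           extract_features(bytes(range(256)) * 20)]
y_train = [0, 1]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

def suspicion_score(data: bytes) -> float:
    """Return the model's estimated probability that the file is malicious."""
    return model.predict_proba([extract_features(data)])[0][1]

print(f"Suspicion score: {suspicion_score(b'some downloaded file contents'):.2f}")

Real products train on enormous sample sets and weigh far richer signals, but the general shape is the same: features go in, a risk score comes out.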
Once again, it’s important to remind ourselves that today’s malware is created largely for profit. Hackers use it to gain personal and financial info, either for their own purposes or to sell it for profit. The files you have stored on your devices have a street value. That includes tax returns, financial docs, payment info, and so on. Moreover, when you consider all the important things you keep on your devices, like your photos and documents, those have value too. Should you get caught up in a ransomware attack, a hacker puts a price tag on them for their return.
Needless to say, and you likely know this already, antivirus is essential for you and your devices.
You’ll find our AI-powered antivirus in all our McAfee+ plans. Better yet, our plans have dozens of protections that block the ways hackers distribute malware. To name just a few, our Text Scam Detector blocks links to suspicious sites that host malware and other attacks — and our Web Protection does the same for your browser. Our plans also include our industry-first online protection score that shows you just how safe you are, along with suggestions that can make you safer still. Together, our McAfee+ plans offer more than just antivirus. They protect your devices, your privacy, and your identity overall.
[i] https://www.historyofinformation.com/detail.php?entryid=2860
[ii] https://www.historyofinformation.com/detail.php?id=1676
[iii] https://www.theatlantic.com/technology/archive/2016/05/the-computer-virus-that-haunted-early-aids-researchers/481965/
[iv] https://www.forbes.com/sites/daveywinder/2020/05/04/this-20-year-old-virus-infected-50-million-windows-computers-in-10-days-why-the-iloveyou-pandemic-matters-in-2020
The post What is Malware? appeared first on McAfee Blog.
The number of AI-powered fake news sites has now surpassed the number of real local newspaper sites in the U.S.
How? AI tools have made creating entire fake news sites quicker and easier than before — taking one person minutes to create what once took days for dozens and dozens of people.
Researchers say we crossed this threshold in June 2024, a “sad milestone” by their reckoning.[i] As traditional, trusted sources of local news shut down, they’re getting replaced with sensationalistic and often divisive fake news sites. What’s more, many of these fake news sites pose as hometown newspapers.
They’re anything but.
These sites produce disinformation in bulk and give it a home. In turn, the articles on these fake news sites fuel social media posts by the thousands and thousands. Unsuspecting social media users fall for the clickbait-y headlines, click the links, read the articles, and get exposed to yet more “news” on those sites – which they then share on their social feeds thinking the stories are legit. And the cycle continues.
As a result, social media feeds find themselves flooded with falsehoods, misrepresentations, and flat-out lies. Researchers spotted the first of these sites in mid-2023, and the number of them is growing rapidly today.
In all, the rise of AI-powered fake news sites now plays a major role in the spread of disinformation.
When we talk about so-called “fake news,” we’re really talking about disinformation and misinformation. You might see and hear those two terms used interchangeably. They’re different, yet they’re closely related.
Disinformation is intentionally spreading misleading info.
Misinformation is unintentionally spreading misleading info (the person sharing the info thinks it’s true).
This way, you can see how disinformation spreads. A bad actor posts a deepfake with deliberately misleading info — a form of disinformation. From there, others take the misleading info at face value and pass it along as truth via social media — a form of misinformation.
The bad actors behind disinformation campaigns know this relationship well. Indeed, they feed it. In many ways, they rely on others to amplify their message for them.
With that, we’re seeing an explosion of fake news sites with content nearly, if not entirely, created by AI — with bad actors pushing the buttons.
Funded by partisan operations in the U.S. and by disinformation operations abroad, these sites pose as legitimate news sources yet push fake news that suits their agenda — whether to undermine elections, tarnish the reputation of candidates, create rifts in public opinion, or simply foster a sense of unease.
One media watchdog organization put some striking figures to the recent onrush of fake news sites. In May 2023, the organization found 49 sites that it defined as “Unreliable AI-Generated News Websites,” or UAINS. In February 2024, that number grew to more than 700 UAINS.[ii]
Per the watchdog group, these sites run with little to no human oversight. Additionally, they try to pass themselves off as legitimate by presenting their AI “authors” as people.[iii] Brazenly, at least one publisher had this to say when confronted with the fact that his “reporter” bylines were really AI bots:
The goal was to create “AI personas” that can eventually “grow into having their own following,” maybe even one day becoming a TV anchor. “Each AI persona has a unique style … Some sort of — this is probably not the right word — personality style to it.” [iv]
Beyond spreading disinformation, these sites are profitable. Recent research found that among the top 100 digital advertisers, 55% of them had their ads placed on disinformation sites. Across all industries and brands, 67% of those with digital ads wound up on disinformation sites.[v]
To clarify, these advertisers support these disinformation sites unwittingly. The researchers cite the way that online advertising platforms algorithmically place ads on various sites as the culprit. Not the advertisers themselves.
So as we talk about disinformation sites cropping up at alarming rates, we also see bad actors profiting as they prop them up.
Follow-up research pushes the estimated number of AI-powered fake news sites yet higher. In June, analysts discovered 1,265 sites targeting U.S. internet users with fake news – many posing as “local” news outlets. Shockingly, that figure surpasses the number of local newspapers still running in the U.S., at 1,213 outlets.[vi] (Side note: between 2005 and 2022, some 2,500 local newspapers shuttered in the U.S.[vii])
The actors and interests behind these sites follow a straightforward formula. In word salad fashion, they’ll mix the name of a town with classic publication names like Times, Post, or Chronicle to try to give themselves an air of credibility. Yet the content they post is anything but credible. AI generates the content from tip-to-tail, all to suit the disinformation the site wants to pump out.
The U.S. isn’t alone here. Similar sites have cropped up in Europe as well. EU DisinfoLab, an independent European organization that researches disinformation, found that outside actors mimicked several legitimate sites and used them to spread disinformation.[viii] The mimicked sites included Bild, The Guardian, and the NATO website.
So how easy is it to spot these fake news sites? The answer is that it’s getting tougher and tougher.
Fake news sites once gave off several cues that they were indeed fake, whether because they were created by earlier, cruder versions of AI tools or by human content creators. They simply didn’t look, feel, or read right. That’s because it took a lot of manual work to create a fake news site and make it look legitimate.
For starters, the site needed a sharp visual design and an easy way of surfacing articles to readers. It also meant cooking up a virtual staff, including bios of owners, publishers, editors, and bylines for the writers on the site. It also called for creating credible “About” pages and other deeper site content that legitimate news sites feature. Oh, and it needed a nice logo too. Then, and only then, could the actors behind these sites start writing fake news articles.
Now, AI does all this in minutes.
The Poynter Institute for Media Studies, a non-profit journalism school and research organization, showed how it indeed took minutes using several different AI tools.[ix] One tool created fake journalists, along with backgrounds, bylines, and photos. Another tool provided the framework of web code to design and build the site. As for the articles themselves, a few prompts into ChatGPT wrote serviceable, if not bland, articles in minutes as well.
As a result, these sites can look “real enough” to casual viewers. Taken at face value, all the trappings of a legitimate news site are there, with one exception — the articles. They’re fake. And they go on to do the damage that the bad actors behind them want them to do.
The people who create these fake news sites rely on others to take the lies they push at face value — and then immediately react to the feelings they stir up. Outrage. Anger. Dark joy. Without pause. Without consideration. If an article or post you come across online taps into those emotions, it’s a sure-fire sign you should follow up and see if what you’ve stumbled across is really real.
Here are a few things you can do:
Seek out objective reporting.
Outside of a newspaper’s Op-Ed pages where editorial opinions get aired, legitimate editorial staff strive for objectivity—reporting multiple dimensions of a story and letting the facts speak for themselves. If you find articles that are blatantly one-sided or articles that blast one party while going excessively easy on another, consider that type of reporting a red flag.
Watch out for clickbait.
Sensationalism, raw plays to emotion, headlines that conjure outrage — they’re all profitable because they stir people up and get them to click. Content like this is the hallmark of fake news, and it’s certainly the hallmark of AI-powered fake news as well. Consider stories like these as red flags as well.
Use fact-checking resources.
Come across something questionable? Still uncertain of what you’re seeing? You can turn to one of the several fact-checking organizations and media outlets that make it their business to separate fact from fiction. Each day, they assess the latest claims making their way across the internet — and then figure out if they’re true, false, or somewhere in between.
Check other known and long-standing news sources.
Search for other reputable sources and see what they’re saying on the topic. If anything at all. If the accounts differ, or you can’t find other accounts at all, that might be a sign you’re looking at fake news.
Additionally, for a list of reputable information sources, along with the reasons they’re reputable, check out “10 Journalism Brands Where You Find Real Facts Rather Than Alternative Facts.” It’s published by Forbes and authored by an associate professor at The King’s College in New York City.[x] It certainly isn’t the end-all, be-all of lists, yet it provides you with a good starting point. Both left-leaning and right-leaning editorial boards are included in the list for balance.
Stick with trusted voter resources.
With Election Day coming around here in the U.S., expect many bad actors to push false voting info, polling results, and other fake news that tries to undermine your vote. Go straight to the source for voting info, like how to register, when, where, and how to vote — along with how to confirm your voting registration status. You can find all this info and far more with a visit to https://www.usa.gov/voting-and-elections.
You can find another excellent resource for voters at https://www.vote411.org, which is made possible by the League of Women Voters. Particularly helpful is the personalized info it offers: by entering your address, you can pull up voting details specific to where you live.
If you have further questions, contact your state, territory, or local election office. Once again, usa.gov offers a quick way to get that info at https://www.usa.gov/state-election-office.
[i] https://www.newsguardtech.com/press/sad-milestone-fake-local-news-sites-now-outnumber-real-local-newspaper-sites-in-u-s/
[ii] https://www.newsguardtech.com/press/newsguard-launches-2024-election-misinformation-tracking-center-rolls-out-new-election-safety-assurance-package-for-brand-advertising/
[iii] https://www.bloomberg.com/news/newsletters/2024-05-17/ai-fake-bylines-on-news-site-raise-questions-of-credibility-for-journalists
[iv] Ibid.
[v] https://www.nature.com/articles/s41586-024-07404-1
[vi] https://www.newsguardtech.com/press/sad-milestone-fake-local-news-sites-now-outnumber-real-local-newspaper-sites-in-u-s/
[vii] https://localnewsinitiative.northwestern.edu/research/state-of-local-news/2022/report/
[viii] https://www.cybercom.mil/Media/News/Article/3895345/russian-disinformation-campaign-doppelgnger-unmasked-a-web-of-deception/
[ix] https://www.poynter.org/fact-checking/2023/chatgpt-build-fake-news-organization-website/
[x] https://www.forbes.com/sites/berlinschoolofcreativeleadership/2017/02/01/10-journalism-brands-where-you-will-find-real-facts-rather-than-alternative-facts
The post Hallucinating Headlines: The AI-Powered Rise of Fake News appeared first on McAfee Blog.
In the aftermath of major disasters like Hurricanes Helene and Milton, people come together to rebuild and recover. Unfortunately, alongside the genuine help, there are always opportunistic scammers ready to exploit the chaos for personal gain. Knowing what to look out for can help protect you and your community from falling victim to these fraudulent schemes.
The National Center for Disaster Fraud (NCDF), established by the Justice Department after Hurricane Katrina in 2005, reminds the public to be cautious of hurricane-related solicitations. While natural disasters like Hurricane Helene often bring out the best in people eager to help, they also give criminals an opportunity to exploit the situation by stealing money or personal information. Here are some of the most common scams and fraud to watch out for, and how you can safeguard yourself.
As residents begin to rebuild, many turn to contractors for help with repairs. Scammers often pose as legitimate contractors but lack proper licensing or qualifications. They may demand upfront payment and then disappear without completing the work or do subpar repairs.
How to Protect Yourself:
Disasters often inspire a wave of generosity, but they also give rise to fake charities. Scammers may set up fraudulent organizations that claim to be helping victims of Hurricane Helene and Milton, only to pocket the money for themselves.
How to Protect Yourself:
After a major disaster, there is often a sharp increase in demand for essential goods like water, fuel, and building supplies. Unscrupulous businesses or individuals may take advantage by charging exorbitant prices.
How to Protect Yourself:
Scammers may pose as FEMA representatives, insurance adjusters, or other government officials. They’ll claim to help expedite your relief or insurance claim in exchange for personal information or payment.
How to Protect Yourself:
Cybercriminals often send out emails or texts that look like they’re from legitimate organizations, trying to trick people into clicking on malicious links. These phishing scams can lead to identity theft or financial loss.
How to Protect Yourself:
In the wake of Hurricanes Helene and Milton, the most important thing you can do is stay vigilant. While the majority of people are focused on helping and healing, there will always be a small number looking to take advantage. By recognizing the signs of common scams and taking precautionary measures, you can protect yourself and your community from further harm. If you suspect you’ve been targeted by a scam, report it to local law enforcement or the Federal Trade Commission (FTC) immediately.
The post How to Avoid Scams in the Wake of Hurricane Helene and Milton appeared first on McAfee Blog.
With the election quickly approaching, it’s essential to be informed and cautious about the growing number of voting scams. Scammers are becoming more sophisticated, using everything from artificial intelligence to fake text messages to trick people into sharing sensitive information. Here’s a breakdown of the types of voting scams that have already been seen this year and the specific steps you can take to protect yourself.
Scammers pretending to be election workers are sending fraudulent text messages to Maryland voters, falsely claiming they are not registered to vote in November. The texts urge recipients to click a fake link to “resolve” their registration status. Similar scams have been reported across the country from Sacramento, California to Marietta, Georgia.
How to protect yourself:
A new voting scam is targeting seniors in Michigan, where scammers are asking for Social Security and credit card information under the pretense of early voting opportunities. Michigan’s Secretary of State office has received numerous complaints about seniors being approached in person by imposters posing as election workers while trying to steal individuals’ identities.
How to protect yourself:
A bipartisan group of 51 attorneys general issued a warning to Life Corporation, a company accused of sending scam robocalls during the New Hampshire primary. These calls used AI to impersonate President Biden and spread false information to discourage voter participation. While this bipartisan task force is committed to tackling illegal robocalls nationwide, citizens should still be aware of the risk of deepfake audio.
How to protect yourself:
Scams tend to increase during election years, so be proactive in safeguarding against these latest fraud tactics. By following these steps, you can help protect yourself from falling victim to election-related scams. Voting is a critical part of democracy, and staying vigilant is key to both safeguarding your personal information and your right to participate.
The post Beware of These Voting Scams Happening Now appeared first on McAfee Blog.
In today’s digital world, the line between reality and deception has become increasingly blurred, with cybercriminals leveraging cutting-edge AI technologies to exploit our trust and interest in celebrities. As we continue to engage with the internet in unprecedented ways, McAfee’s 2024 Celebrity Hacker Hotlist sheds light on a growing threat—online scams using the identities of our favorite stars.
At the forefront of McAfee’s latest list is Scarlett Johansson, a renowned actress, recognized for her roles in Marvel’s Black Widow and Lost in Translation. However, this time, Johansson isn’t making headlines for a movie—she’s ranked as the U.S. celebrity whose name is most frequently used in online scams. Her likeness has been used in AI-generated deepfakes, from unauthorized ads to fake endorsements, creating a major risk for unsuspecting fans. The list doesn’t stop with Johansson. Celebrities like Kylie Jenner, Taylor Swift, and Tom Hanks also find themselves in the top 10, with hackers exploiting their images, voices, and reputations to deceive internet users. Whether it’s for fake giveaways, cryptocurrency scams, tickets to high-demand concerts, free downloads, or disinformation campaigns, these stars are unwilling participants in the cybercrime ecosystem.
McAfee’s Threat Research Labs Team compiled the Celebrity Hacker Hotlist by identifying the celebrities – including social media influencers – whose names and likenesses are most often exploited to lead consumers to online scams. This ranges from the purchase of fake goods or services that then steal your money or bank details to social media or email scams that convince consumers to click a risky link that unknowingly installs malware. All of these scams jeopardize consumers’ data, privacy, and identity.
The top ten list includes a combination of longtime talent and more recently well-known names from various fields, showcasing their potential influence on consumers of all generations:
The advent of AI has revolutionized many industries, but it’s also given cybercriminals a powerful new tool: the deepfake. In addition to phishing scams and links containing malware that exploit the popularity and reputation of celebrities and deceive their fans, these highly realistic video or audio clips can mimic the likeness of a person, making it nearly impossible to tell whether the content is real or fake. Deepfakes of celebrities are now being used to promote fraudulent products, steal personal information, and trick people into downloading malware. Imagine watching a video of your favorite star endorsing a new product, only to find out later it wasn’t them at all. This is no longer a distant possibility but a reality many fans face as scammers get better at crafting fake content. In fact, some of these AI-generated videos are so convincing that even the savviest of internet users can fall for them.
For instance, Tom Hanks’ image was manipulated to promote dubious “miracle cures,” while Taylor Swift’s likeness has been used in fake political endorsements. Johnny Depp and Kylie Jenner’s names have been used by scammers in fake cryptocurrency giveaways, luring fans to engage with risky websites or phishing scams.
While these scams primarily aim to steal money or personal data from consumers, the effects are far-reaching. For fans, the consequences can be devastating, with financial losses ranging from a few hundred dollars to over half a million. In addition to the financial risks, victims often feel violated after engaging with fraudulent content. For celebrities, these scams can have a serious impact on their public image and brand. Many stars, including Johansson, have taken a firm stand against the unauthorized use of their images in AI-generated content. As Johansson has publicly expressed, it’s not just about personal privacy but about the broader implications of AI and the need for accountability in the tech world.
As AI becomes more accessible, these scams are only expected to rise. To combat this growing issue, McAfee recently introduced a powerful combination of educational resources and advanced, AI-powered technology: McAfee Deepfake Detector, the world’s first automatic and AI-powered deepfake detector, and the McAfee Smart AI Hub, a go-to online space for the latest in AI security knowledge and news. Here are some practical tips to protect yourself from AI-generated scams:
In 2024, staying safe online means being aware of the rapidly evolving landscape of AI and cybercrime. Scammers are getting better at mimicking trusted names like Scarlett Johansson, Kylie Jenner, and Johnny Depp to deceive fans. With AI-powered tools like deepfake detectors and informed vigilance, we can reduce the risk of falling victim to these digital traps. Stay informed, stay cautious, and always think twice before clicking on a too-good-to-be-true celebrity endorsement. For more information about McAfee’s 2024 Celebrity Hacker Hotlist and ways to protect yourself, visit https://www.mcafee.ai
The study was conducted by McAfee® threat intelligence researchers to determine the number of risky sites and the amount of misleading content generated by searching a celebrity name with commonly used terms. A risk score was calculated for each celebrity using a combination of McAfee WebAdvisor results and an analysis of known deepfakes recorded between January 1 and September 15, 2024. McAfee’s WebAdvisor browser extension leverages McAfee’s technology to protect users from malicious websites and, when turned on, rates nearly every internet website it finds, using red, yellow, and green icons to indicate the website’s risk level and blocking access to, or warning a user about, malicious or risky URL links. Ratings are created using patented advanced technology that conducts automated website tests, and WebAdvisor works with Chrome, Edge, Safari, and Firefox.
The post Scarlett Johansson Tops McAfee’s 2024 Celebrity Hacker Hotlist for AI Online Scams appeared first on McAfee Blog.
Bad news travels quickly. Or so goes the old saying. Yet we do know this: disinformation and fake news spread faster than the truth. And what makes it spread even faster is AI.
A recent study on the subject shows that fake news travels across the internet faster than stories that are true. Complicating matters is just how quickly and easily people can create fake news stories with AI tools.
Broadly speaking, AI-generated content has flooded the internet in the past year — an onrush of AI voice clones, AI-altered images, video AI deepfakes, and all manner of text in posts. Not to mention, entire websites are populated with AI-created content.
One set of published research shows how this glut of AI-created content has grown since AI tools started becoming publicly available in 2023. It suggests that in just the first three months of 2024, the volume of deepfakes worldwide surged by 245% compared to the start of 2023. In the U.S., that figure jumped to 303%.[i]
But before we dive into the topic, we need to make an important point — not all AI-generated content is bad. Companies use AI deepfake technologies to create training videos. Studios use AI tools to dub movies into other languages and create captions. And some content creators just want to get a laugh out of Arnold Schwarzenegger singing show tunes. So, while deepfakes are on the rise, not all of them are malicious.
The problem arises when people use deepfakes and other AI tools to spread disinformation. That’s what we’ll focus on here.
First, let’s look at what deepfakes are and what disinformation really is.
So, what is a deepfake? One dictionary definition reads like this:
An image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.[ii]
Looking closely at that definition, three key terms stand out: “altered,” “manipulated,” and “misrepresent.”
Altered
This term relates to how AI tools work. People with little to no technical expertise can tamper with existing source materials (images, voices, video) and create clones of them.
Manipulated
This speaks to what can be done with these copies and clones. With them, people can create entirely new images, tracts of speech, and videos.
Misrepresent
Lastly, this gets to the motives of the creators. They might create a deepfake as an obvious spoof like many of the parody deepfakes that go viral. Or maliciously, they might create a deepfake of a public official spewing hate speech and try to pass it off as real.
Again, not all deepfakes are malicious. It indeed comes down to what drives the creator. Does the creator want to entertain with a gag reel or inform with a how-to video narrated by AI? That’s fine. Yet if the creator wants to besmirch a political candidate, make a person look like they’ve said or done something they haven’t, or to pump out false polling location info to skew an election, that’s malicious. They clearly want to spread disinformation.
You might see and hear the terms “disinformation” and “misinformation” used interchangeably. They’re different, yet they’re closely related. And both will play a role in this election.
Disinformation is intentionally spreading misleading info.
Misinformation is unintentionally spreading misleading info (the person sharing the info thinks it’s true).
This way, you can see how disinformation spreads. A bad actor posts a deepfake with misleading info — a form of disinformation. From there, others take the misleading info at face value, and pass it along as truth — a form of misinformation.
The two work hand-in-hand by design, because bad actors have a solid grasp on how lies spread online.
Deepfakes primarily spread on social media. And disinformation there has a way of spreading quickly.
Researchers found that disinformation travels deeper and more broadly, reaches more people, and goes more viral than any other category of false info.[iii]
According to the research findings published in Science,
“We found that false news was more novel than true news, which suggests that people were more likely to share novel information … Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.”
Thus, bad actors pump false info into social media channels and let people spread it by way of shares, retweets, and the like.
And convincing deepfakes have only made it easier for bad actors to spread disinformation.
The advent of AI tools has spawned a glut of disinformation unseen before, and for two primary reasons:
In effect, the malicious use of AI makes it easier for fakery to masquerade as reality, with chilling authenticity that’s only increasing. Moreover, it churns out fake news on a massive scope and scale that’s increasing rapidly, as we cited above.
AI tools can certainly create content quickly, but they also do the work of many. What once took sizable ranks of writers, visual designers, and content producers to create fake stories, fake images, and fake videos now gets done with AI tools. Also as mentioned above, we’re seeing entire websites that run on AI-generated content, which then spawn social media posts that point to their phony articles.
Largely we’ve talked about disinformation, fake news, and deepfakes in the context of politics and in attempts to mislead people. Yet there’s another thing about malicious deepfakes and the bad news they peddle. They’re profitable.
Bad news gets clicks, and clicks generate ad revenue. Now with AI powering increasingly high volumes of clickbait-y bad news, it’s led to what some researchers have coined the “Disinformation Economy.” This means that the creators of some deepfakes might not be politically motivated at all. They’re in it just for the money. The more people who fall for their fake stories, the more money they make as people click.
And early indications show that disinformation has broader economic effects as well.
Researchers at the Centre for Economic Policy Research (CEPR) in Europe have started exploring the impact of fake news on economic stability. In their first findings, they said, “Fake news profoundly influences economic dynamics.”[iv] Specifically they found that as fake news sows seeds of uncertainty, it reverberates through the economy, leading to increased unemployment rates and lower industrial production.
They further found bad news can lead to pessimism, particularly about the economy, which leads to people spending less and lower sales for companies — which further fuels unemployment and reductions in available jobs as companies cut back.[v]
Granted, these early findings call for more research. Yet we can say this: many people turn to social media for their news, the place where fake news and malicious deepfakes spread.
Global research from Reuters uncovered that more people primarily get their news from social media (30%) rather than from an established news site or app (22%).[vi] This marks the first time that social media has overtaken direct access to news. Now, if that leads to exposure to significant portions of pessimistic fake news, it makes sense that millions of people could have their perceptions altered by it to some extent — which could translate into some form of economic impact.
So how does the spread get stopped? As you can quickly surmise, that comes down to us. Collectively. The fewer people who like and share disinformation and malicious deepfakes, the quicker they’ll die off.
A few steps can help you do your part in curbing disinformation and malicious deepfakes …
Verify, then share.
This all starts by ensuring what you’re sharing is indeed the truth. Doubling back and doing some quick fact-checking can help you make sure that you’re passing along the truth. Once more, bad actors entirely rely on just how readily people can share and amplify content on social media. The platforms are built for it. Stop and verify the truth of the post before you share.
Come across something questionable? You can turn to one of the several fact-checking organizations and media outlets that make it their business to separate fact from fiction:
Flag falsehoods.
If you strongly suspect that something in your feed is a malicious deepfake, flag it. Social media platforms have reporting mechanisms built in, which typically include a reason for flagging the content.
Get yourself a Deepfake Detector.
Our new Deepfake Detector spots AI phonies in seconds. It works in the background as you browse — and lets you know if a video or audio clip contains AI-generated audio. All with 95% accuracy.
Deepfake Detector monitors audio being played through your browser to determine if the content you’re watching or listening to contains AI-generated audio. McAfee doesn’t store any of this audio or browsing history.
Further, a browser extension shows just how much audio was deepfaked, and at what point in the video that content cropped up.
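For a sense of the general approach (purely illustrative, with a stand-in detection function rather than McAfee’s actual model), a tool like this can score audio in short chunks and report how much of a clip was flagged, and where:

# Purely illustrative. The detection function below is a hypothetical
# stand-in; the point is the chunk-by-chunk scanning and reporting.
def looks_ai_generated(chunk: list) -> bool:
    """Hypothetical stand-in for a real deepfake-detection model."""
    return sum(abs(s) for s in chunk) / len(chunk) > 0.8  # dummy rule

def scan_audio(samples: list, chunk_seconds: float = 1.0, rate: int = 16000):
    """Return the share of chunks flagged and the timestamps where they start."""
    chunk_len = int(chunk_seconds * rate)
    flagged = []
    for i in range(0, len(samples), chunk_len):
        if looks_ai_generated(samples[i:i + chunk_len]):
            flagged.append(i / rate)  # start time of the flagged chunk, in seconds
    total_chunks = max(1, -(-len(samples) // chunk_len))  # ceiling division
    return len(flagged) / total_chunks, flagged

fake_clip = [0.9] * 32000 + [0.1] * 32000  # 2 seconds "suspicious," 2 seconds "clean"
share, timestamps = scan_audio(fake_clip)
print(f"{share:.0%} of the clip flagged, starting at seconds: {timestamps}")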
McAfee Deepfake Detector is available for English language detection in select new Lenovo AI PCs, ordered on Lenovo.com and select local retailers in the U.S., UK, and Australia.
From January to July of 2024, states across the U.S. introduced or passed 151 bills that deal with malicious deepfakes and deceptive media.[vii] However, stopping their spread really comes down to us.
The people behind AI-powered fake news absolutely rely on us to pass them along. That’s how fake news takes root, and that’s how it gets an audience. Verifying that what you’re about to share is true is vital — as is flagging what you find to be untrue or questionable.
Whether you use fact-checking sites to verify what you come across online, use a tool like our Deepfake Detector, or simply take a pass on sharing something that seems questionable, they’re all ways you can stop the spread of disinformation.
[i] https://sumsub.com/newsroom/deepfake-cases-surge-in-countries-holding-2024-elections-sumsub-research-shows/
[ii] https://www.merriam-webster.com/dictionary/deepfake
[iii] https://science.sciencemag.org/content/359/6380/1146
[iv] https://cepr.org/voxeu/columns/buzz-bust-how-fake-news-shapes-business-cycle
[v] https://www.uni-bonn.de/en/news/134-2024
[vi] https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2023/dnr-executive-summary
[vii] Ibid.
The post Clickbait and Switch: How AI Makes Disinformation Go Viral appeared first on McAfee Blog.
In my world of middle-aged mums (mams), Instagram is by far the most popular social media platform. While many of us still have Facebook, Instagram is where it all happens: messaging, sharing, and yes, of course – shopping!! So, when one of my gal pals discovers that her Instagram account has been hacked, there is understandably a lot of panic!
Believe it or not, Facebook is still hanging onto the top spot as the most popular social media platform with just over 3 billion active monthly users, according to Statista. YouTube comes in 2nd place with 2.5 billion users. Instagram and WhatsApp tie in 3rd place with 2 billion users each. Interestingly, TikTok has 1.5 billion users and is in 4th place – but watch this space, I say!
Despite Facebook having the most monthly users, it isn’t where the personal conversations and engagement take place. That’s Instagram’s sweet spot. Instagram messaging is where links are shared and real personal interaction occurs. In fact, a new report shows that Instagram accounts are targeted more than any other online account and make up just over a quarter of all social media hacks. So, it makes sense why hackers would expend considerable energy in trying to hack Instagram accounts. They’ll have a much greater chance of success if they use a platform where there is an appetite and trust for sharing links and personal conversations.
But why do they want to get their hands on your account? Well, they may want to steal your personal information, scam your loyal followers by impersonating you, sell your username on the black market or even demand ransoms! Hacking Instagram is big business for professional scammers!!
So, you reach for your phone early one morning to do a quick scroll on Instagram before you start the day, but you can’t seem to log on. Mmmmm. You then see some texts from friends checking whether you have in fact become a cryptocurrency expert overnight. OK – something’s off. You then notice an email from Instagram notifying you that the email linked to your account has been changed. Looks like you’ve been hacked! But please don’t spend any time stressing. The most important thing is to take action ASAP as the longer hackers have access to your account, the greater the chance they can infiltrate your life and create chaos.
The good news is that if you act quickly and strategically, you may be able to get your account back. Here is what I suggest you do – fast!:
1. Change Your Password & Check Your Account
If you are still able to log in to your account then change your password immediately. And ensure it is a password you haven’t used anywhere else. Then do a quick audit of your account and fix any changes the hacker may have made, e.g. remove access to any device you don’t recognise and any apps you didn’t install, and delete any email addresses that aren’t yours.
Next, turn on two-factor authentication (2FA) to make it harder for the hacker to get back into your account. This will take you less than a minute and is absolutely critical. Instagram will give you the option to receive the login code either via text message or via an authentication app. I always recommend the app in case you ever lose control of your phone.
But, if you are locked out of your account then move on to step 2.
2. Locate The Email From Instagram
Every time there is a change to your account details or some new login activity, Instagram will automatically send a message to the email address linked with the account.
But there’s good news here. The email from Instagram will ask you if you in fact made the changes and will provide a link to secure your account in case it wasn’t you. Click on this link!! If you can access your account this way, immediately check that the only linked email address and recovery phone number are yours and delete anything that isn’t yours. Then change your password.
But if you’ve had no luck with this step, move on to step 3.
3. Request a Log-In Link
You can also ask Instagram to email or text you a login link. On an iPhone, you just need to select ‘forgot password?’ and on your Android phone, tap ‘get help logging in’. You will need to enter the username, email address, and phone number linked to your account.
No luck? Keep going…
4. Request a Security Code
If the login link won’t get you back in, the next step is to request a security code. Simply enter the username, email address, or phone number associated with your account, then tap on “Need more help?” Select your email address or phone number, then tap “Send security code” and follow the instructions.
5. Video Selfie
If you have exhausted all of these options and you’ve had no luck then chances are you have found your way to the Instagram Support Team. If you haven’t, simply click on the link and it will take you there. Now, if your hacked account contained pictures of you then you might just be in luck! The Support Team may ask you to take a video selfie to confirm who you are and that in fact you are a real person! This process can take a few business days. If you pass the test, you’ll be sent a link to reset your password.
So, you’ve got your Instagram account back – well done! But wouldn’t it be good to avoid all that stress again? Here are my top tips to make it hard for those hackers to take control of your Insta.
1. It’s All About Passwords
I have no doubt you’ve heard this before but it’s essential, I promise! Ensuring you have a complex and unique password for your Instagram account (and all your online accounts) is THE best way of keeping the hackers at bay. And if you’re serious about this, you need to get yourself a password manager that can create (and remember) crazily complex and random passwords that are beyond any human ability to create. Check out McAfee’s True Key – a complete no-brainer!
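If you’re curious what a password manager actually does when it spits out one of those passwords, here’s a tiny sketch using only Python’s standard library (for illustration only; your manager handles all of this for you):

# What a password generator does under the hood. The secrets module is
# built for security-sensitive randomness, unlike the random module.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'k#9Vq!tM%2...' and different every run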
2. Turn on Multifactor Authentication (MFA)
Multi-factor authentication adds another layer of security to your account making it that much harder for a hacker to get in. It takes minutes to set up and is essential if you’re serious about protecting yourself. It simply involves using a code to log in, in addition to your password. You can choose to receive the code via a text message or an authenticator app – always choose the app!
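And if you’ve ever wondered how those authenticator app codes appear out of thin air, here’s a minimal sketch using the open-source pyotp library (the secret below is generated on the spot, purely for the example):

# How authenticator apps generate login codes (time-based one-time
# passwords, or TOTP), sketched with the open-source pyotp library.
import pyotp

# When you enable MFA, the service shares a secret with your app (usually
# via a QR code). Both sides derive the same 6-digit code from that secret
# and the current time, so the code changes every 30 seconds.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()
print("Current 6-digit code:", code)
print("Would the server accept it right now?", totp.verify(code))

Because the code comes from a secret stored on your phone rather than from a text message, it can’t be intercepted the way an SMS can, which is why the app is always the better choice.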
3. Choose How To Receive Login Alerts
Acting fast is the name of the game here so ensure your account is set up with your best contact details, so you receive login alerts ASAP. This can be the difference between salvaging your account and not. Ensure the alerts will be sent to where you are most likely to see them first so you can take action straight away!
4. Audit Any Third-Party Apps
Third-party apps that you have connected to your account could potentially be a security risk. So, only ever give third-party apps permission to access your account when absolutely necessary. I suggest taking a few minutes to disconnect any apps you no longer require to keep your private data as secure as possible.
Believe it or not, Instagram is not just an arena for middle-aged mums! I can guarantee that your teens will be on there too. So, next time you’re sharing a family dinner, why not tell them what you’re doing to prevent yourself from getting hacked? And if you’re not convinced they are listening? Perhaps remind them just how devastating it would be to lose access to their pics and their people. I am sure that might just work.
Till next time
Stay safe online!
Alex
The post My Instagram Has Been Hacked – What Do I Do Now? appeared first on McAfee Blog.
Imagine this: you wake up one morning to find that your bank account has been emptied overnight. Someone halfway across the world has accessed your account using a password you thought was secure. Incidents like these are unfortunately becoming more common, with identity theft and fraud cases steadily increasing over the last decade.
This month is Cybersecurity Awareness Month, with the theme “Secure Our World,” which serves as a timely reminder to reassess and enhance your cybersecurity strategies against ever-evolving cyber threats. In an election year, the digital landscape becomes a breeding ground for cyber scams and malicious activities aimed at exploiting political fervor and public uncertainty. With the 2024 election on the horizon, it’s more critical than ever to strengthen our cybersecurity defenses.
By prioritizing cybersecurity awareness and implementing robust protective measures during this dedicated month, you can safeguard your personal information, protect your financial assets, and ensure the security of your digital interactions. Let’s explore five simple yet powerful ways to increase your internet security and have peace of mind in today’s digital landscape.
Passwords serve as the first line of defense against unauthorized access to your accounts but 78% of people use the same password for more than one account. Here’s how you can create and manage complex passwords:
Multifactor authentication (MFA) adds an extra layer of security by requiring two or more factors to access your accounts: typically something you know (like a password), something you have (like your phone or a security key), and something you are (like a fingerprint or face scan).
Follow these steps to enable multifactor authentication:
Phishing is a common tactic in which cybercriminals impersonate legitimate entities, such as banks or reputable companies, to lure you into disclosing sensitive information like passwords or credit card numbers. These attacks often occur via email, text messages, or fake websites designed to appear authentic, exploiting human trust and curiosity to steal valuable data for malicious purposes.
Identifying Phishing Emails:
Reporting Phishing:
Software updates, also known as patches, often include security fixes to protect against known vulnerabilities. Here’s how to keep your software up to date:
Updating Operating Systems and Applications:
Social media platforms are integral parts of modern communication, but they also pose significant security risks if not managed carefully. Here are essential tips to enhance your social media security:
By implementing these straightforward yet effective cybersecurity practices, you can significantly reduce the risk of falling victim to online threats. McAfee+ can also keep you more secure and private online with 24/7 scans of the dark web to ensure your personal and financial info is safe, alerts about suspicious financial transactions and credit activity, and up to $2 million in identity theft coverage and restoration.
The post Top Tips for Cybersecurity Awareness Month appeared first on McAfee Blog.
In today’s digital world, both personal and professional environments are evolving faster than ever. As artificial intelligence (AI) becomes integral to our daily lives, it’s crucial that the devices we use stay ahead of the curve—both in terms of performance and security. According to Gartner, AI PCs are projected to total 114 million units in 2025, an increase of 165.5% from 2024. That’s why we’re excited to introduce the next generation of AI-powered PCs with our partners, designed to provide cutting-edge computing experiences with next-level AI-protection with McAfee Deepfake Detector.
These AI PCs have been built with one goal in mind: to harness the power of AI for every user. Whether you’re a content creator, business professional, gamer, or researcher, AI PCs adapt to your needs, offering enhanced processing speed, personalized optimization, and smart task management. From boosting productivity to delivering immersive entertainment, AI PCs are designed to handle it all.
We understand that in an age where digital content is omnipresent, online security must be a top priority. That’s why the following AI PCs come with McAfee Deepfake Detector preinstalled. This advanced tool is designed to protect you against the growing threat of AI-manipulated media, ensuring that you can trust the content you see online. McAfee’s Deepfake Detector uses cutting-edge algorithms to analyze audio and distinguish real content from AI-manipulated content.
McAfee’s recent research shows that 27% of Americans say they may or will purchase an AI PC for themselves or a loved one during the 2024 holiday season. 40% of people aged 25-34 say the same. When asked what characteristics of an AI PC are most important to consumers:
As deepfakes become more sophisticated, this feature provides peace of mind, ensuring that you’re always one step ahead of malicious actors.
Our new AI PC range combines world-class performance with trusted security solutions. Whether you’re using these devices for work, play, or creativity, you’ll have the confidence of knowing your personal data and online experiences are safeguarded by the latest in AI-driven protection. McAfee Deepfake Detector is available on the following AI PC:
Stay tuned for more details about this exciting new range, and discover how we’re redefining the future of online protection.
The post Introducing AI PCs with McAfee Deepfake Detector appeared first on McAfee Blog.
As we head into a season filled with moments that matter to consumers – from the upcoming U.S. election to the holiday shopping rush – online safety is more important than ever. With AI-generated content on the rise and scammers able to carry out more sophisticated scams, it’s crucial to stay vigilant and ensure you’re fully protected. If you’ve ever thought, “Is that text message really from my bank?” or “I don’t want my personal life to be available to people I don’t know on social media,” McAfee+ can help you.
This autumn, McAfee has introduced a set of innovative tools designed to make online protection simpler, faster, and more effective. This includes streamlined experiences that make it easier and faster to be protected from the start, as well as enhancements that reinforce privacy protection across social media platforms, protect against the latest smishing texts in real time, and provide control over performance impact of malware scans. Whether it’s staying safe during the rush of holiday shopping or navigating potential misinformation leading up to the elections, McAfee has you covered with the latest online protection.
During the busy autumn season, time is of the essence. With more people shopping online and receiving an influx of emails and text messages, the last thing you need is complicated, time-consuming setup processes. McAfee’s latest update is all about making protection simpler and more accessible.
The newly streamlined setup ensures you’re fully protected in fewer steps, whether you’re setting up Windows or mobile. And by integrating experiences that were initially cloud-based directly in Windows and mobile apps, consumers can seamlessly manage their online privacy and social media settings directly from their devices.
With the upcoming elections and family gatherings on the horizon, many of us may be sharing more on social media than usual. But how much is too much? With McAfee’s Social Privacy Manager, people get personalized privacy settings based on their sharing preferences – now with industry-first support for TikTok – in addition to platforms like Facebook, Instagram, and LinkedIn. In an era where online privacy concerns are skyrocketing, and 9 out of 10 social media users are concerned about protecting their online privacy and identity, McAfee continues to stand at the forefront of online security.
Whether you’re prepping for holiday photos or protecting your kids’ privacy on TikTok and YouTube, Social Privacy Manager empowers you to adjust over 100 privacy settings across seven social platforms – Facebook, Instagram, X, LinkedIn, YouTube, Google and TikTok – ensuring your information stays private with just a few clicks.
By adding TikTok support, Social Privacy Manager also covers the top two platforms that teens use: TikTok and YouTube. With a family plan, parents can now easily help set privacy settings for their kids – and with 43% of people feeling that online privacy risks have increased in 2024, McAfee’s focus on providing control over social media privacy is both timely and essential.
The McAfee Social Privacy Dashboard
Heading into the holiday season, consumers often face an uptick in phishing and smishing scams, as fraudsters take advantage of shopping rushes and delivery notifications to deceive people. More than a third (39%) of people who use mobile phones admit they have clicked on a text scam message such as a suspicious text from an unknown number or a fake package delivery text, and nearly half (44%) state that they or someone they know have been a victim of such a text scam.
In response to rising phishing and text scam threats, McAfee has upgraded its AI-powered Text Scam Detector. When a text message arrives that contains a link to a website, that link will be scanned and analyzed by McAfee Smart AI™ in real time. If the link leads to a malicious or phishing website, the text message will be blocked.
On iPhones, scam texts are automatically filtered into a junk folder, and on Android, you’ll receive instant alerts when a suspicious message arrives, helping you avoid costly mistakes when you’re busiest.
Text Scam Detector as part of McAfee Mobile Security
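If you’re curious what that kind of real-time link scanning boils down to, the short sketch below is a purely illustrative Python example, not McAfee’s actual detection logic. It simply pulls any links out of an incoming text and checks their domains against a made-up blocklist; real products lean on constantly updated threat intelligence instead of a static list.

```python
# Illustrative sketch only -- not McAfee's implementation. It shows the general
# idea of scanning a text message for links and flagging any whose domain is on
# a blocklist of known-bad sites (the domains here are invented for the example).
import re
from urllib.parse import urlparse

KNOWN_BAD_DOMAINS = {"track-your-parcel-now.example", "bank-secure-login.example"}

URL_PATTERN = re.compile(r"https?://\S+")

def scan_text_message(message: str) -> list[str]:
    """Return the list of suspicious links found in an incoming text."""
    suspicious = []
    for link in URL_PATTERN.findall(message):
        domain = urlparse(link).netloc.lower()
        if domain in KNOWN_BAD_DOMAINS:
            suspicious.append(link)
    return suspicious

sms = "Your package is waiting: https://track-your-parcel-now.example/claim"
print(scan_text_message(sms))  # ['https://track-your-parcel-now.example/claim']
```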
“Antivirus protection slows down my PC” is a complaint you often hear. However, recent research from AV-Comparatives shows that this is not the case; in fact, McAfee delivered protection with the least performance impact on PCs of all tested vendors.
To ensure people do not even have to worry about their computer slowing down during holiday shopping or while working through election news, McAfee’s Antivirus now offers a ‘Fast Scanning’ feature. This allows people to balance performance and security, offering customizable options for quick scans or deeper system checks without compromising PC speed.
The Antivirus Dashboard
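For a sense of the quick-versus-deep trade-off, here is a generic sketch (again, not McAfee’s Fast Scanning implementation) in which a quick scan only visits files changed in the last few days while a deep scan walks everything.

```python
# A generic illustration of the quick-vs-deep scan trade-off -- not McAfee's
# Fast Scanning feature. A "quick" scan only visits recently changed files,
# while a "deep" scan walks every file under the chosen folder.
import os
import time
from pathlib import Path

def files_to_scan(root: str, quick: bool = True, recent_days: int = 7):
    cutoff = time.time() - recent_days * 86400
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if quick and path.stat().st_mtime < cutoff:
            continue  # skip files untouched since the cutoff on a quick scan
        yield path

quick_targets = list(files_to_scan(os.path.expanduser("~/Downloads"), quick=True))
print(f"Quick scan would check {len(quick_targets)} recently changed files")
```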
Whether you’re working remotely, traveling for the holidays, or accessing election news online, privacy is essential. A VPN service is sometimes seen as a double-edged sword: it protects your privacy while you’re connected to the internet, but it can slow that connection down. McAfee’s Secure VPN now offers even faster, more stable connections with an expanded network of 7,000 servers in 48 countries. Additionally, consumers can enjoy extended WireGuard protocol support on Android, Windows, and iOS for online privacy protection across devices, no matter where you are.
VPN Settings
From safeguarding social media privacy to blocking scam texts and ensuring secure browsing, McAfee+ is designed to help you stay safe in an increasingly complex digital world. McAfee+ plans are available for both individuals and families – and with protections such as McAfee’s Social Privacy Manager and McAfee’s Text Scam Detector included, consumers can rest easy knowing that McAfee is constantly watching out for their online protection.
In today’s digital age, securing your online identity and privacy has never been more critical. McAfee’s latest product enhancements reflect the company’s commitment to delivering advanced, easy-to-use solutions that help consumers stay safe online. Whether you’re looking for protection from phishing and smishing scams, malware, or threats to your privacy on social media, our expanded product range offers solutions for all consumers.
For more information on McAfee’s latest products and plans, visit McAfee.com.
The post How to Maximize the Latest McAfee+ Enhancements for Peace of Mind This Autumn appeared first on McAfee Blog.
Elections are the bedrock of democratic societies, but historically, they have been vulnerable to various forms of manipulation and fraud. Over the last decade, there have only been 1,465 proven cases of election fraud out of the hundreds of millions of votes cast, but election interference through tactics like deliberately spreading disinformation has become increasingly more common.
Election Day for determining the next U.S. President isn’t until November 5th, but early voting starts as early as September 6th in some states. With election season officially underway, understanding past election scams and current threats is crucial for safeguarding the future of democratic processes. As technology and political landscapes evolve, so do the methods used to undermine electoral integrity. Let’s examine the impact of historical election scams, how cybersecurity measures have advanced in response, and the current landscape of election cybersecurity threats.
Throughout history, election scams have come in many forms, from ballot stuffing to voter intimidation. One of the most notorious examples is the 1960 Kennedy-Nixon U.S. presidential election, which was so close that both Republicans and Democrats accused the other side of stuffing ballot boxes. Nixon later claimed in his autobiography that widespread fraud had happened in Illinois, which Kennedy won by less than 10,000 votes.
In more recent history, the 2016 U.S. presidential election highlighted a new dimension of electoral interference: cyber manipulation and disinformation. Russian operatives used social media to spread divisive content and hacked into the email accounts of political figures to release sensitive information. This year, Iranian hackers successfully breached the Trump campaign and targeted the Harris campaign as well.
Hacking is not limited to U.S. elections. In the 2017 French presidential election, hackers targeted the campaign of Emmanuel Macron, leaking internal documents and emails. While the impact of this breach was mitigated by the swift response of the Macron campaign and French authorities, it highlighted the vulnerability of political campaigns to cyberattacks and the importance of rapid countermeasures.
In response to these emerging threats, cybersecurity measures have evolved substantially. In the wake of the 2016 election interference, there was a heightened awareness of the vulnerabilities in electoral systems. This led to the development and implementation of more robust cybersecurity protocols aimed at protecting the integrity of elections.
As technology continues to advance, so do the tactics used by malicious actors. The current landscape of election cybersecurity threats includes:
To effectively address these threats, it is essential for both voters and election officials to be informed and proactive. Voters should be educated about the signs of misinformation and the importance of verifying information from credible sources. Election officials should stay informed about the latest cybersecurity practices and potential threats and adhere to best practices for cybersecurity, including regular updates, strong access controls, and encryption. Transparent communication with the public about the steps being taken to secure elections can build trust and counteract disinformation efforts.
Understanding past election scams and current cybersecurity threats is vital for protecting the integrity of democratic processes. By learning from historical incidents and staying vigilant against emerging threats, we can strengthen our electoral systems and ensure that future elections are fair, transparent, and secure. Through ongoing advancements in technology and policy, we can address the challenges of today and safeguard the future of democracy.
The post Past Election Scams: Lessons Learned and Current Threats appeared first on McAfee Blog.
In a recent special hosted by Oprah Winfrey titled “AI and the Future of Us”, some of the biggest names in technology and law enforcement discussed artificial intelligence (AI) and its wide-ranging effects on society. The conversation included insights from OpenAI CEO Sam Altman, tech influencer Marques Brownlee, and FBI Director Christopher Wray. These experts explored both the promises and potential pitfalls of this rapidly advancing technology. As AI continues to shape our world, it’s crucial to understand its complexities—especially for those unfamiliar with the nuances of AI technology.

One of the most significant concerns raised in the special was the rise of AI-generated content, specifically deepfakes, and how they are being weaponized for disinformation. Deepfakes, alongside other generative AI advancements, are progressing at a pace that outstrips our capacity to manage them effectively, posing new challenges to the public.
A deepfake is a highly realistic piece of synthetic media, often video or audio, that uses AI to swap faces or voices to create fake, yet believable, content. Brownlee demonstrated how rapidly this technology is evolving by comparing two pieces of AI-generated footage. The newer sample, powered by OpenAI’s Sora, was far more convincing than its predecessor from just months earlier. While seasoned observers might spot the odd flaw, most people could easily mistake these fakes for real footage, especially as the technology improves.
The demonstration revealed how AI-generated content has reached unprecedented levels of realism, making it difficult to distinguish between what’s real and what’s fake. This development raises serious concerns about misinformation, particularly in the context of deepfake technology, where AI can create highly realistic, yet entirely fabricated, videos and audio.
The ability of AI to generate convincingly fake content isn’t just a novelty—it’s a threat, particularly when used for malicious purposes. FBI Director Christopher Wray recalled a chilling introduction to deepfake technology. At an internal meeting, his team presented a fabricated video of him speaking words he never said. It was a stark reminder of how AI could be used to manipulate public opinion, create false narratives, and tarnish reputations. McAfee created Deepfake Detector as a defense against malicious and misleading deepfakes. McAfee Threat Labs has found that three seconds of your voice is all scammers and cybercriminals need to create a deepfake.
Wray discussed the increasing use of deepfakes in *sextortion*—a disturbing crime where predators manipulate images of children and teens using AI to blackmail them into sending explicit content. The misuse of AI doesn’t end there, though. In a world where misinformation and disinformation are rampant, deepfakes have become a powerful tool for deception, influencing everything from personal relationships to politics.
The upcoming U.S. presidential election is one area where deepfakes could have particularly dire consequences. Wray pointed out that foreign adversaries are already using AI to interfere with American democracy. Posing as ordinary citizens, these bad actors use fake social media accounts to spread misleading AI-generated content, adding to the chaos of political discourse. In fact, AI-generated images of high-profile figures like former President Donald Trump and Vice President Kamala Harris have already misled millions of people.
Bill Gates emphasized that AI’s progression is moving faster than many anticipated, even for experts in the field. This rapid evolution could lead to major societal shifts sooner than expected, presenting both exciting opportunities and significant challenges. Sam Altman of OpenAI echoed these concerns, stressing that the world is only beginning to see the full scope of AI’s potential impact on the economy and everyday life.
One of the more controversial points discussed was AI’s potential to displace jobs. Gates predicted that in the future, the workweek might shrink as automation takes over many tasks, suggesting a shift to a three-day workweek. While automation may replace many roles, Gates argued that human-centric professions—those requiring creativity and interpersonal skills—will remain in demand. This highlights the growing need for skills that machines can’t replicate.
Christopher Wray, Director of the FBI, warned of how AI is being weaponized by criminals. From manipulating innocent images into explicit content to using AI for extortion, the technology is being leveraged to amplify illegal activities. Wray illustrated how AI has made it easier for less experienced criminals to engage in more sophisticated crimes, particularly in targeting vulnerable populations like teenagers.
The overarching message from the discussion was clear: to mitigate the risks posed by AI, close collaboration between governments and technology companies is crucial. Altman stressed the importance of implementing safety measures, likening the regulation of AI to that of airplanes and pharmaceuticals. Gates echoed the call for responsible development, emphasizing that regulatory frameworks must evolve alongside the technology.
AI is advancing rapidly, changing the way we live, work, and communicate. For those unfamiliar with the intricacies of generative AI, the recent discussion on “AI and the Future of Us” provided a comprehensive look at both the opportunities and dangers AI presents. From job market disruptions to the rise of deepfakes and disinformation, it’s clear that AI will continue to shape our world in unpredictable ways. By acknowledging both its promise and its peril, we can better prepare ourselves for the future of AI.
Despite the concerns raised, the conversation was not without optimism. AI holds immense potential to revolutionize sectors like healthcare and education. However, the discussion made it clear that thoughtful regulation and public awareness are necessary to ensure AI serves society positively and ethically. By balancing innovation with caution, there’s hope that AI can be harnessed to benefit everyone.
The post Unmasking AI and the Future of Us: Five Takeaways from the Oprah TV Special appeared first on McAfee Blog.
With less than 60 days left until Election Day, the digital landscape has become a battleground not just for votes but for your personal security. With political ads, fake voter registration sites, and disinformation campaigns cropping up everywhere, it’s essential to stay vigilant against common election scams and election manipulation schemes. Here’s how you can navigate this crucial time safely.
Before diving into specific scams, it’s important to differentiate between misinformation and disinformation. Misinformation refers to false or misleading information shared without malicious intent, often due to ignorance or misunderstanding. Disinformation, on the other hand, is deliberately false or misleading information spread with the intent to deceive, manipulate, or sway public opinion.
Knowing the difference is crucial because it influences how you approach and verify the information you encounter. Disinformation campaigns are often more sophisticated and can be more challenging to detect, making it essential to keep a healthy dose of skepticism while navigating this election season.
One prevalent scam during election season is fake voter registration websites. These sites may look official but are designed to steal your personal information. They often appear as pop-ups or ads on social media and search engines.
To protect yourself:
When you’re excited about a political candidate, it’s natural to want to support their campaign by sending them a donation. Scammers prey on that excitement by creating fake donation websites to try to take money from unsuspecting individuals. TikTok banned requests for political donations on their platform because of the prevalence of these types of scams.
To avoid sending money to scammers:
Political ads are ubiquitous during election season, with political ad spending projected to be $10.2 billion in 2024. But not all political ads are created equal. Misleading or false ads can be crafted to manipulate voters by presenting distorted facts or outright lies.
To discern the truth:
Social media is a double-edged sword during elections. While it offers a platform for legitimate discourse, it’s also a breeding ground for disinformation. Social media amplifies both credible information and disinformation due to its algorithms prioritizing engagement over accuracy, making sensational or misleading content more likely to be seen and shared. The anonymity and ease of content creation on these platforms enable the rapid spread of false narratives, which can be difficult to counteract amidst the sheer volume of information circulating.
You might encounter false content designed to manipulate voter perceptions. To navigate this:
Advances in artificial intelligence (AI) have led to easily created realistic deepfakes—manipulated videos or images that can spread false narratives. Earlier this year, a fake robocall using AI voice-cloning technologies tried to influence voters in the New Hampshire primary.
Our mission is to help you navigate these challenges effectively. For decades, McAfee has stood as a reliable source of information and guidance. This election season, we are helping to discern what is real versus what is fake through our new Deepfake Detector, the world’s first automatic and AI-powered deepfake detector. Trained on close to 200,000 samples and counting, Deepfake Detector can identify and alert consumers within seconds of AI-altered audio being detected in videos.
To detect deepfakes on your own:
By understanding the types of scams and misinformation that proliferate during election season and implementing these practical tips, you can confidently and securely engage in the democratic process. Protecting your personal information and making informed decisions is not just about securing your vote—it’s about safeguarding the integrity of your digital presence and ensuring that your voice is heard clearly and accurately.
The post How to Avoid Common Election Scams appeared first on McAfee Blog.
You didn’t get the job. Worse yet, you got scammed. Because the opening was never real in the first place. It was a job scam, through and through.
We’ve covered job scams for some time here in our blogs. And as it is with many other sorts of scams, AI tools have made it easier for scammers to pull them off.
It looks something like this:
And the number of these attacks? They’re on the rise.
In the Federal Trade Commission’s (FTC) report earlier this year, it called out $491 million in reported losses due to job scams in 2023. Compared to the $367 million reported the year prior, that marked roughly a one-third increase in losses. Overall, the median loss was just above $2,000 per victim.
This aligns with further figures from the Identity Theft Resource Center (ITRC), which also saw a bump in online job scams. Comparing 2023 with 2022, the ITRC reported a 118% jump in reported scams.
As with all such figures, these only capture reported cases of job scams. Not everyone files a complaint with the FTC, law enforcement, or other agencies. The actual figures are thus likely higher.
Social media platforms have several mechanisms in place to identify and delete the phony profiles that scammers use for these attacks. In 2023, LinkedIn reported the removal of 86.8 million fake accounts over the year.[i] More than 90% were caught at registration, and the remainder were caught through manual investigations. Overall, 99.6% of fake accounts were eliminated before a LinkedIn member reported them.
Likewise, Facebook has its own measures in place. Across 2023, they removed more than 2.6 billion fake accounts.[ii] Automated and other internal safeguards caught roughly 99% before users reported them. As for their latest figures, Facebook says it caught 99.7% of fake accounts before users reported them.
However, other platforms prove problematic simply due to their nature. Many job scam offers arrive by way of a Telegram message. Here, “recruiters” dangle a particularly enticing offer, yet say that they only communicate over Telegram. With that, job seekers have no real way of knowing who’s truly on the other end of the conversation.
Needless to say, that’s much the same problem people have with job scams that find them via text.
Even so, scammers still find their way through carefully established defenses, and others stick to platforms and technologies that provide them with cover. For them, it’s a numbers game. They create high volumes of scam profiles, posts, and messages — now made easier with AI tools — and reel in the victims who fall for their lures. As the FTC’s data shows, just a handful of victims can reap thousands in return.
The people behind job scams want the same old things. They want your money, and they want your personal info for identity theft. In some cases, they want you to launder money or pass along bad checks, all under the guise of signing up for onboard training and materials.
Those are just a few of the signs. Here are several other red flags to look for:
They ask for your Social Security or tax ID number.
In the hands of a scammer, your SSN or tax ID is the primary key to your identity. With it, they can open bank cards and lines of credit, apply for insurance benefits, collect benefits and tax refunds, or even commit crimes, all in your name. Needless to say, scammers will ask for it, perhaps under the guise of a background check or for payroll purposes. The only time you should provide your SSN or tax ID is when you know you have accepted a legitimate job with a legitimate company, and then only through a secure document signing service, never via email, text, or over the phone.
They want your banking information.
Another trick scammers rely on is asking for bank account information so that they can wire a payment to you. As with the SSN above, closely guard this info and treat it in the same way. Don’t give it out unless you actually have a legitimate job with a legitimate company.
They want you to pay before you get paid.
Some scammers will take a different route. They’ll promise employment, but first, you’ll need to pay them for training, onboarding, or equipment before you can start work. Legitimate companies won’t make these kinds of requests.
Aside from the types of info they ask for, the way they ask for your info offers other clues that you might be mixed up in a scam. Look out for the following as well:
You can sniff out many online scams with the “too good to be true” test. High pay, low hours, and even offers of things like a laptop and other perks might be the signs of a scam. When pressed for details, some scammers offer an answer full of holes or no reply at all.
Job scammers hide behind their screens. They use the anonymity of the internet to their advantage, so they might refuse a video chat or call, which are common in hiring nowadays. That’s a possible sign. Yet AI tools have changed the game here somewhat. Sophisticated scammers can create real-time deepfakes that overlay another face and voice over their own in video calls.
Scammers love to keep their scams moving along at a good clip. They want to cash in quickly and move on to their next victim. Pay close attention if the recruiter starts asking for personal info almost right away. Or if they start asking for money or any dealings with money. It might be a scam.
Do a little background check. Any time an employer or recruiter comes along, check out their company or employment agency online. It’s just the same as you would if you were prepping for an interview. Look at their history, what they do, how long they’ve been doing it, and where they have locations. Online reviews can help, as can a quick search online with the company’s name followed by “scam.”
You can also dig a little deeper than that.
In the U.S., the Better Business Bureau (BBB) offers a searchable listing of businesses. That includes a brief profile, a rating, and even a list of complaints (and company responses) waged against them. Spending some time here can help sniff out trouble.
Internationally, you can turn to organizations like S&P Global Ratings and the Dun & Bradstreet Corporation. They can provide detailed background info, yet they might require signing up for an account.
Yet be on the lookout for imposters. Many job scammers will pose as recruiters at legitimate companies. They’ll use the logos and digital letterhead of real organizations and generally do what they can to convince you that they, and their offer, are real.
In these cases, look for the warning signs mentioned above. Follow up by visiting the website of the company in question. See if the job is listed there. Also, see if the contact info on the site matches up with the contact info the “recruiter” used to reach you. If they differ, you’re likely looking at a scam.
Given the way we rely so heavily on the internet to get things done and simply enjoy our day, comprehensive online protection software that looks out for your identity, privacy, and devices is a must. Specific to job scams, it can help you in several ways, these being just a few:
[i] https://about.linkedin.com/transparency/community-report#fake-accounts-2023-jul-dec
[ii] https://transparency.meta.com/reports/community-standards-enforcement/fake-accounts/facebook/#content-actioned
The post AI Enters the Mix as Online Job Scams Continue to Rise appeared first on McAfee Blog.
There used to be a saying that ‘nothing is certain except death and taxes’. Well, I now think it needs to be amended – and ‘data breaches’ needs to be added on the end! Regardless of where you live, not a month goes by without details of yet another data breach hitting the news headlines. This year has seen some of the biggest, most damaging breaches in recent history. According to the US Identity Theft Resource Center, over 1 billion people were impacted by data breaches in the first 6 months of 2024. Up to 560 million people worldwide were affected by the Ticketmaster data breach, 30 million by the Ticketek breach, and all of AT&T’s cell customers had call and text records exposed in a massive breach. And that’s just a few quick examples.
A data breach happens when there is unauthorised access to sensitive, private, or confidential information. This could include account details, purchase histories, customer identities, payment methods, or confidential private data, for example, medical records.
There are a few different ways that a data breach can happen. Firstly, hackers may exploit weaknesses in systems, networks, applications, or even physical security to gain unauthorized access to sensitive information. These hackers may be acting alone or be part of a larger ring. Secondly, it could happen by a ‘malicious insider’ – a disgruntled or recently sacked employee who wants revenge by hurting the company or, an employee who wants to profit off the company’s data by selling it online. And lastly, it can happen accidentally – when an email containing sensitive data ends up in the wrong hands, a laptop with sensitive data gets stolen or even a USB drive with confidential data is lost.
It’s hard to know whether there has actually been an increase in data breaches or whether new reporting laws simply mean we now hear about more of them. For years, data breaches have likely been occurring without our knowledge. In Australia, there has been a consistent rate of data breaches since 2020 – about 450 every 6 months. And while this is higher than when the mandatory reporting laws were brought in in 2018, this could be explained by increased vigilance by the companies themselves.
Over the last 2 years in Australia, we have had some significant data breaches that have affected more than 10 million Aussies each time. In 2022, the Optus and Medibank breaches each affected around 10 million Aussies, in 2023 the Latitude Financial breach affected 14 million consumers and the recent Medisecure breach in May 2024 affected close to 15 million customers. And who can forget the Canva data breach in 2019 that affected 139 million customers worldwide? And that’s only the large ones! It’s now widely accepted that most Aussies would have been affected by a data breach with some affected on multiple occasions.
So, I believe the time has come when we need to accept that data breaches are part of modern, digital life and redirect the energy we could use worrying into protecting ourselves so that the fallout will be minimal. Here are three areas where I suggest you spend some energy.
Ensuring you have a unique, long, and complex password for each of your online accounts is the ABSOLUTE best way of protecting yourself in case of a data breach. Let me explain. It’s pretty common for hackers to steal customers’ personal data as part of a data breach, and this will include login credentials. Hackers will then use bots to test the stolen email and password combinations to see where else they could possibly gain entry. So, if you’ve used the same password elsewhere then you could be in for a world of pain.
But let’s keep it real. Many of us don’t have a separate password for every online account. It takes a lot of work to reorganise your digital life. Most folks have a handful of passwords they use on rotation. But as you can see, this isn’t ideal.
And remember, if you find out a company you have an account with was hacked, change your password immediately. And of course, if you have used that password, or even something similar, on any other accounts then you’ll need to change it too.
The best way to get on top of this whole situation is to invest in a password manager like McAfee’s free True Key software, which can both generate and remember super complex passwords. With many people having 100+ online accounts, you would need to be a member of Mensa to remember all those passwords on your own. A password manager takes all the stress away.
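To make the idea concrete, here is a tiny sketch of what a password manager does behind the scenes: it generates a long, random, unique password per site and keeps it in a vault so you never have to remember it. The sites and the vault here are illustrative only; use a real password manager rather than rolling your own.

```python
# A minimal sketch of what a password manager does under the hood. This is
# illustrative only -- a real manager encrypts its vault with a master password.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length: int = 20) -> str:
    """Build a long random password from a cryptographically secure source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

vault = {}  # hypothetical vault: one unique password per site
for site in ("email.example", "bank.example", "shop.example"):
    vault[site] = generate_password()

print(vault["bank.example"])  # unique to that one site, so a breach elsewhere can't reuse it
```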
If someone has managed to get their hands on your email/password combination but you have multi-factor authentication in place then you will be protected as it will stop any unauthorised access to your account. How good!! So, if any platform or company that you have an account with offers it then PLEASE action it.
Now, there are two main types of two-factor authentication: one that sends a code via text message, and another that uses an authentication app, typically installed on a mobile device. Since phone numbers can be hijacked and text messages intercepted, I always recommend using an authentication app for added security.
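If you have ever wondered how those authenticator apps come up with their six-digit codes, here is a bare-bones sketch of the time-based one-time password (TOTP) algorithm they follow (RFC 6238). The secret in the example is a common demo value, not a real one.

```python
# A bare-bones sketch of how authenticator apps compute their 6-digit codes
# (time-based one-time passwords, RFC 6238). The secret below is a demo value;
# real apps store the secret your provider gives you when you set up 2FA.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # the current 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code for this demo secret
```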
Believe it or not, a company’s security breach may not be the reason that your data is stolen. All it can take is a small slip-up – and remember we are all human! Here’s what you need to do to be vigilant:
Staying up to date with the news and abreast of data breaches is a great way to stay vigilant. Services like Have I Been Pwned allow anyone to check whether their email addresses or phone numbers have been involved in a data breach. Simply enter your email address on their site, and they will provide a list of breaches in which your information was compromised. Firefox also offers data breach alerts, while Apple lets you check for leaked passwords stored in iCloud.
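For the technically curious, Have I Been Pwned also publishes a free ‘Pwned Passwords’ range API that lets you check a password without ever sending it anywhere: only the first five characters of its SHA-1 hash leave your machine. The sketch below uses that publicly documented endpoint as it stood at the time of writing.

```python
# Check a password against the Pwned Passwords "range" API (k-anonymity model).
# Only the first five hex characters of the SHA-1 hash are sent; the password
# itself never leaves your machine. Endpoint per the public HIBP documentation.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)   # times this password has appeared in breaches
    return 0

print(pwned_count("password123"))   # a very large number -- never use this password!
```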
You can also subscribe to credit monitoring services which will alert you to any major changes in your credit report that could indicate identity theft or fraud.
I also recommend taking the time to check your bank and credit card account statements for anything unusual or unauthorised. And always report anything suspicious to your bank ASAP.
I also recommend that you rethink everything you share online. Remember, anything you share online could resurface in a breach and that includes private messages, photos, and social media posts. If you do need to upload sensitive files to the cloud for storage such as a picture of your birth certificate or passport, why not encrypt the image first so that no one else can retrieve it?
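Encrypting a file before it goes to the cloud is easier than it sounds. Here is a minimal sketch using the popular Python ‘cryptography’ package; the file name is just an example, and the key needs to be stored somewhere safe and separate from the cloud copy.

```python
# A minimal sketch of encrypting a sensitive file before uploading it to cloud
# storage, using the 'cryptography' package (pip install cryptography).
# The file name is hypothetical; keep the key safe -- without it the file is gone.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store this securely, e.g. in a password manager
cipher = Fernet(key)

with open("passport.jpg", "rb") as f:          # the original, sensitive file
    encrypted = cipher.encrypt(f.read())

with open("passport.jpg.enc", "wb") as f:      # upload this version, not the original
    f.write(encrypted)

# Later, to get the original back:
# original = Fernet(key).decrypt(open("passport.jpg.enc", "rb").read())
```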
Encrypted messaging services are also a great idea if you are concerned about your privacy. I’m a big fan of Signal but WhatsApp and Telegram are also good options.
So, the bad news my friends, is that data breaches are inevitable unless you are planning on dropping out of society and living off the grid – tempting, I know! But the good news is that there are steps you can take to ‘future-proof’ yourself for that moment when you will be affected. So, rethink your password strategy, turn on 2-factor authentication, limit what you share, and you’ll make it hard for cyber criminals to get entrenched in your digital life.
Till next time
Stay safe online
Alex
The post How To Minimise the Fallout From a Data Breach appeared first on McAfee Blog.
As technology rapidly advances, the boundaries of what’s possible in personal computing are continuously expanding. One of the most exciting innovations on the horizon is the concept of the AI PC, which stands for Artificial Intelligence Personal Computer. AI PCs accounted for 14% of all personal computers shipped in the second quarter of 2024, with demand expected to continue to grow.
These intelligent machines are set to transform the way we interact with our computers, offering unprecedented performance and personalization. Let’s delve into what an AI PC is, explore the benefits it offers consumers, and understand how it is reshaping the future of computing.
An AI PC is a computing device that integrates artificial intelligence capabilities directly into its hardware and software. Unlike traditional PCs, which rely on external software or cloud services for AI functionalities, AI PCs have built-in AI processors or coprocessors that enable them to perform intelligent tasks locally.
These machines leverage advanced AI algorithms to enhance various aspects of computing, from performance and efficiency to user experience and security. They include a neural processing unit (NPU), a type of processor designed to handle the mathematical computations specific to machine learning algorithms. NPU speed is measured in trillions of operations per second (TOPS).
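To put a TOPS figure in perspective, here is some back-of-the-envelope arithmetic. Every number in it is hypothetical; it simply shows how a rated throughput translates into how many model runs per second an NPU could sustain.

```python
# Back-of-the-envelope arithmetic for what a TOPS rating means in practice.
# All numbers are hypothetical, chosen only to show the relationship between
# rated throughput and sustained inferences per second.
npu_tops = 40                    # rated NPU throughput: 40 trillion ops/second
ops_per_inference = 2e9          # hypothetical model needing ~2 billion ops per run
utilization = 0.5                # assume only half of peak throughput is achieved

inferences_per_second = (npu_tops * 1e12 * utilization) / ops_per_inference
print(f"~{inferences_per_second:,.0f} inferences/second")   # ~10,000
```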
By embedding AI capabilities into the core of the PC, these devices can offer a more responsive, personalized, and secure computing environment. Here’s how they are transforming personal computing:
One of the standout features of AI PCs is their ability to automate and optimize tasks intelligently. AI PCs can learn from user behavior and system performance to streamline processes and improve efficiency. For example, AI can manage system resources dynamically, prioritizing tasks based on current needs and usage patterns. This means that applications requiring high performance, such as gaming or video editing, can run more smoothly without manual intervention.
AI algorithms can also predict and pre-load applications and files that users are likely to access next, reducing load times and improving overall responsiveness. This level of automation and optimization ensures that users experience a seamless and efficient computing environment.
Data-intensive applications, such as those used for machine learning, scientific research, and complex simulations, benefit greatly from the power of AI PCs. These machines are equipped with specialized AI processors designed to handle large volumes of data quickly and efficiently. By offloading specific tasks to these AI processors, the main CPU is freed up to handle other operations, resulting in faster processing speeds and reduced latency.
For professionals and researchers working with big data or computationally heavy applications, AI PCs can drastically cut down processing times and enhance productivity. The integration of AI ensures that these applications can perform complex calculations and analyses with greater accuracy and speed.
AI PCs excel in delivering personalized user experiences by learning and adapting to individual preferences and behaviors. Through continuous learning, AI systems can customize the operating environment based on how users interact with their PCs. This can include adjusting system settings, recommending software or files, and even optimizing user interfaces to align with personal habits and preferences.
For example, an AI PC might analyze your work patterns and suggest tools or shortcuts that enhance productivity. It can also personalize your entertainment experience by recommending media content based on your viewing history and preferences. This level of personalization creates a more intuitive and enjoyable user experience.
Cyber threats have become a constant in the digital age. Last year, 880,418 Americans reported cybercrime to the FBI’s Internet Crime Complaint Center, a 10% increase from 2022.
AI PCs are addressing this issue with advanced threat detection and mitigation capabilities. AI-driven security systems use machine learning to analyze patterns and behaviors, identifying potential threats such as malware, phishing attempts, or unauthorized access in real time. This proactive approach enhances the protection of sensitive data and ensures a safer computing environment.
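As a toy illustration of that anomaly-detection idea (and emphatically not McAfee’s technology), the sketch below trains a model on synthetic ‘normal’ activity and then flags an event that looks nothing like it.

```python
# A toy illustration of the anomaly-detection idea behind AI-driven security:
# learn what "normal" behavior looks like, then flag outliers.
# Requires scikit-learn; every number here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per event: [files touched per minute, network MB sent, new processes spawned]
normal_activity = rng.normal(loc=[20, 5, 2], scale=[5, 2, 1], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

suspicious_event = np.array([[800, 300, 40]])   # e.g. mass file access plus heavy upload
print(model.predict(suspicious_event))          # [-1] means flagged as an anomaly
```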
AI PCs are not just about high-performance computing and security; they also excel in assisting with everyday personal tasks. For instance, AI-powered virtual assistants integrated into the PC can help manage schedules, set reminders, and perform routine tasks such as composing emails or creating documents.
These virtual assistants learn from user interactions to offer more accurate and contextually relevant assistance. They can also automate repetitive tasks, such as file organization or data entry, saving users time and effort. By handling mundane activities, AI PCs allow consumers to focus on more complex and creative tasks.
The integration of AI into personal computing is a glimpse into the future of technology. As AI PCs become more advanced, we can expect even greater enhancements in performance, efficiency, and user experience. These devices are not just about adding new features; they represent a fundamental shift in how we interact with technology, making computing more intuitive, personalized, and secure.
As we move forward, keeping an eye on these advancements will be crucial in harnessing their full potential and embracing the next era of personal computing. The future of AI PCs is here, and it’s poised to redefine how we interact with our digital world.
The post What is an AI PC? appeared first on McAfee Blog.
Tom Hanks, one of the most recognizable faces in the world, warns that scammers have swiped his likeness in malicious AI deepfakes.
As reported by NBC News, actor Tom Hanks issued an announcement to his followers on Instagram, saying that his name, likeness, and voice have shown up in deepfaked ads that promote “miracle cures” without his consent.
In the ever-evolving landscape of digital advertising, a new challenge has emerged that blurs the lines between reality and artificial fabrication: AI-generated content using celebrity likenesses.
Tom Hanks isn’t the only victim. Earlier in 2024, we saw a malicious AI deepfake of Taylor Swift front a phishing scam with a free cookware offer. In 2023, the deepfaked likeness of Kelly Clarkson pushed weight loss gummies. And, just a few weeks ago, malicious deepfakes of Prince William endorsed a bogus investment platform. We’ve also seen deepfakes of noteworthy researchers hawking miracle cures as well, which we’ll soon cover in another blog post.
Without question, we live in a time where scammers can turn practically anyone into a deepfake. The AI tools used to create them have only gotten better, more accessible, and easier to use. Compounding that concern is just how convincing these bogus endorsements look and sound.
Malicious deepfakes affect more than the celebrities they mimic. They affect everyone who goes online. As the Tom Hanks case shows, deepfakes can tarnish a celebrity’s reputation, but they can also harm the general public. By pushing disinformation and frauds, deepfakes open the door to health risks, identity theft, and in an election year, voter suppression — as we saw with the Joe Biden AI voice clone robocalls in Vermont.
Celebrities like Scarlett Johansson have begun to fight back legally against the unauthorized use of their likenesses. However, the legal framework in the U.S. remains largely unprepared for the challenges posed by AI-generated content. Yet we’re seeing some progress, at least on a state level in the U.S.
Tennessee recently passed legislation stating that state residents have a property right to their own likeness and voice. In effect, Tennesseans can take legal action if another person or group creates deepfakes in their likeness. Illinois and South Carolina have similar legislation under consideration.
Those represent just a handful of 151 state-level bills that have been introduced or passed through July of this year — all covering AI deepfakes and deceptive media online.[i] Likewise, we’ll take a closer look at how legislation is catching up with AI in an upcoming blog.
As we’re quick to point out in our blogs, not all AI deepfakes are bad. AI deepfake tools have plenty of positive uses, such as dubbing and subtitling movies, creating training and “how-to” videos, and even creating harmless and humorous parody videos — all well within the scope of the law.
The problem is with malicious deepfakes, like the ones Tom Hanks warned us about. Yet how can you spot them?
Technology has kept pace, as with our newly released Deepfake Detector. It alerts you within seconds if it spots AI-manipulated content, right in your browser. It works like this:
Deepfake Detector monitors audio being played through your browser while you browse. If it determines what you’re watching or listening to contains AI-generated audio, it alerts you right away.
McAfee doesn’t store any of this audio or browsing history. What you watch is yours, and you get to keep that private.
It works in the background while you browse. So, if a deepfake Tom Hanks or Taylor Swift video crops up in your feed, you’ll know with a high degree of confidence that it’s a fake. You can easily snooze notifications or turn off scanning right from your dashboard.
Deepfake Detector also shows how much is real and how much is fake. With its browser extension, it indicates what portion of the audio was deepfaked and at what point in the video that content cropped up. Think of it working like a lie detector in the movies: as the video plays, peaks of red lines and troughs of gray lines show you what’s likely fake and what’s likely real.
As AI-detection technology continues to advance, the responsibility also falls on us, collectively, to keep an eye out for fakes. Especially the glut of malicious deepfakes we now face.
The key to navigating this new era of AI is awareness. Indeed, tools will help us spot deepfakes. Yet we can count on ourselves to spot them too.
First off, we need to realize just how easy it is to create a deepfake. Keeping that in mind keeps us on guard. Next, when we see that celebrity gushing about a miracle cure or another promoting a screaming great deal, we know to stop and think before we act.
From there, we have plenty of excellent and reputable fact-checking resources that can help us get to the truth. Snopes, Reuters, Politifact, the Associated Press, and FactCheck.org all offer great ways to find out if what we’re seeing and hearing is true, false, or somewhere in between.
And with this kind of awareness in mind, we’ve launched the McAfee Smart AI Hub. We see the rise of malicious deepfakes as a major concern. It’s a security concern. An identity theft concern. A health concern. An election concern. And a family concern as well. We created the hub with these in mind and established it as a place where you can learn about the latest AI threats. Additionally, it’s a place where you can join the fight against malicious deepfakes by turning in the ones you find online.
While the advent of AI brings remarkable benefits, it also introduces complex challenges. As we move forward, balancing innovation with ethical considerations and consumer protection will be paramount. Without a doubt, we’ll continue to follow it all closely here in our blogs.
As for the Tom Hanks deepfakes, if something seems too good to be true, like miracle advice, it probably is. Stay curious, stay cautious.
[i] https://www.brennancenter.org/our-work/research-reports/states-take-lead-regulating-ai-elections-within-limits
The post Tom Hanks Warns Fans: The Dark Side of AI Scams appeared first on McAfee Blog.
As the Gallagher brothers reunite for the first live Oasis shows in 16 years, scammers have queued up phony ticket schemes to cash in.
With that, we’re advising fans to take extra care as they dash to buy seats for these long-awaited shows. McAfee Labs researchers have discovered over 2,000 suspicious tickets for the 2025 reunion tour on sale online, with prices ranging from £700-£1,845. McAfee is urging fans to be careful when purchasing tickets this weekend.
In the example below, the following offers appeared on a third-party reseller site several days before the opening sale of official tickets on August 31st.
Screenshot of apparent bogus offers for Oasis tickets.
The seller clearly had no seats, as tickets simply weren’t available at that time, either to the public or via pre-release.
Official tickets for the 2025 tour go on sale on August 31st at 9am in the UK and 8am in Ireland, and only through official ticket agents. So if you’re after tickets, head directly to the official Oasis site at https://oasisinet.com.
Official tickets available at oasisinet.com
Concert organizers have made two additional things clear. First, each household has a four-ticket limit per show. Second, any ticket resales must go at face value plus a booking fee.
Of benefit to fans, purchases made through official ticket agents have policies and refunds that protect buyers in the event of cancellations. Additionally, fans who buy tickets with a credit card might also find themselves further protected by Section 75 of the Consumer Credit Act. Keeping these things in mind can help you from getting snared by a scam.
To get genuine Oasis tickets, head over to https://oasisinet.com for info and links to official ticket agents. Make it your first and only starting point.
In the coming days and in the coming months leading up to the shows, expect to see all manner of ticket scams. Yet given the way that concert organizers have structured the shows, you can quickly spot an Oasis ticket scam by looking out for the following:
Scammers can easily create phony social media profiles and ads. Likewise, they can easily use them to sell phony tickets. As always, stick with official ticketing platforms. They sell legitimate tickets and offer legitimate purchase protection.
Related, scammers on social media and elsewhere online will require payment with bank transfers, gift cards, and even cryptocurrency — all payment methods that are tough to recoup in a scam. If you spot this, you’ve spotted a scam.
As pointed out, ticket resales will be at face value plus a booking fee. Any tickets of higher price, or lower for that matter, will be phonies.
Other scams we expect to see will revolve around Oasis merch – shirts, hats, phone cases, you name it. While we don’t have a view into what official merchandise sales will look like, scammers will certainly look to push their share of knockoff or non-existent merch online.
For fans looking for tour merch, you can shop safely with a few straightforward steps:
This is a great one to start with. Directly typing in the correct address for reputable online stores and retailers is a prime way to avoid scammers online. Watch out for sites that spoof legit sites by copying their look and feel and that use addresses that look nearly like legitimate addresses — but aren’t. You’ll see phony sites such as these crop up in search results and in social media ads and posts.
Secure websites begin their address with “https,” not just “http.” That extra “s” stands for “secure,” which means the site uses a secure protocol for transmitting sensitive info like passwords and credit card numbers over the internet. A secure connection often appears as a little padlock icon in the address bar of your browser, so double-check for that. If you don’t see that the site is secure, it’s best to avoid making purchases on that website.
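In code terms, that advice is as simple as checking the scheme on a link before trusting it with payment details. The URLs below are made up for the example.

```python
# A quick sketch of the "look for https" advice in code form: confirm a link
# uses the secure scheme before entering payment details. Example URLs only.
from urllib.parse import urlparse

def uses_https(url: str) -> bool:
    return urlparse(url).scheme == "https"

print(uses_https("https://shop.example/checkout"))  # True
print(uses_https("http://shop.example/checkout"))   # False -- don't enter card details here
```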
Credit cards are a good way to go. In the UK, Section 75 of the Consumer Credit Act protects purchases made with a credit card that cost between £100 and £30,000. In the U.S., the Fair Credit Billing Act offers protection against fraudulent charges on credit cards by giving you the right to dispute charges over $50 for undelivered goods and services or otherwise billed incorrectly. Your credit card companies might have their own policies that improve upon these Acts as well.
Comprehensive online protection with McAfee+ will defend against the latest virus, malware, spyware, and ransomware attacks, plus further protect your privacy and identity. It can also provide strong password protection by generating and automatically storing complex passwords, keeping your credentials safer from hackers and crooks who might try to force their way into your accounts. And, specific to the Oasis scams that will inevitably pop up, online protection can help prevent you from clicking links to known or suspected malicious sites. In addition, select plans of McAfee+ offer up to $2 million in identity theft coverage, along with identity restoration support and lost wallet protection if needed.
The post Wonderwall of Lies: How to Avoid Oasis Reunion Ticket Scams appeared first on McAfee Blog.
A safer internet isn’t just a nice thing to have. It’s a necessity because we rely on it so heavily. And there’s plenty we can do to make it happen.
A safer internet might seem like it’s a bit out of our hands as individuals. The truth is that each of us plays a major role in making it so. As members, contributors, and participants who hop on the internet daily, our actions can make the internet a safer place.
So, specifically, what can we do? Take a few moments to ponder the questions that follow. Using them can help frame your thinking about internet safety and how you can make yourself, and others, safer.
Device safety is relatively straightforward provided you take the steps to ensure it. You can protect your things with comprehensive online protection like our McAfee+ plans, you can update your devices and apps, and you can use strong, unique passwords with the help of a password manager.
Put another way, internet safety is like keeping your house in shape. Just as you mow your lawn, swap out the batteries in your smoke alarm, or change the filters in your heating system, the same goes for the way you should look after the computers, tablets, phones, and connected devices in your home. They need your regular care and maintenance as well. Again, good security software can handle so much of this automatically or with relatively easy effort on your part.
If you’re wondering where to start with looking after the security of your devices, check out our article on how to become an IT pro in your home. It makes the process easy by breaking down the basics into steps that build your confidence along the way.
This includes all kinds of topics. The range covers identity theft, protecting your personal info, privacy, cyberbullying, screen time, when to get a smartphone for your child, and learning how to spot scams online. Just to name a few. And if you visit our blogs from time to time, you see that we cover those and other topics in detail. It offers a solid resource any time you have questions.
Certainly, you have tools that can give you a big hand with those concerns. That includes virtual private networks (VPNs) that encrypt your personal info, built-in browser advisors that help you search and surf safely, plus scam protection that lets you know when sketchy links pop up in emails and messages.
However, internet safety goes beyond devices. It’s a mindset. As with driving a car, so much of our online safety relies on our behaviors and good judgment. For example, one piece of research found that ninety-one percent of all cyberattacks start with a phishing email.[i]
As Tomas Holt, professor of criminal justice at Michigan State University, states, “An individual’s characteristics are critical in studying how cybercrime perseveres, particularly the person’s impulsiveness and the activities that they engage in while online that have the greatest impact on their risk.”
Put another way, scammers bank on an itchy clicker-finger — where a quick click opens the door for an attack. Educating your family about the risks out there, such as phishing attacks and the sketchy links that crop up in search results, goes a long way toward keeping everyone out of trouble. Online protection software like ours covers the rest of the way.
A big part of a safer internet is us. Specifically, how we treat each other — and how we project ourselves to friends, family, and the wider internet. With so much of our communication happening online through the written word or posted pictures, all of it creates a climate around each of us. It can take on an uplifting air or mire you in a cloud of negativity. What’s more, it’s largely out there for all to see. Especially on social media.
Take time to pause and reflect on your climate. A good place to start is with basic etiquette. Verywell Family put together an article on internet etiquette for kids, yet when you give it a close read, you’ll see that it provides good advice for everyone.[ii]
In summary, their advice focuses on five key points:
Of course, the flip side to all of this is what to do when someone targets you with their bad behavior, such as when an online troll hurls hurtful or malicious comments your way. That’s a topic in itself. Check out our article on internet trolls and how to handle them. Once again, the advice there is great for everyone in the family.
We’ve shared quite a bit of info in this article and loaded it up with plenty of helpful links too. Don’t feel like you have to take care of everything in one sitting. See what you have in place and make notes about where you’d like to make improvements. Then, start working down the list. A few minutes each week dedicated to these tasks can greatly increase your security, safety, and savvy.
[i] https://www.darkreading.com/endpoint/91–of-cyberattacks-start-with-a-phishing-email/d/d-id/1327704
[ii] https://www.verywellfamily.com/things-to-teach-your-kids-about-digital-etiquette-460548
The post Internet Safety Begins with All of Us appeared first on McAfee Blog.
Phishing attacks have all kinds of lures. And many are so tried and true that it makes them easy to spot.
The target of a phishing attack is you. More specifically, your personal info and your money. Whether a scammer reaches out by email, with a text, or through a direct message, that’s what they’re after. And with a link, they whisk you off to a sketchy site designed to take them from you.
Just how much phishing is going on? To date, we’ve identified more than half a billion malicious sites out there, a number that grows daily, in large part because these attacks so often succeed. One big reason why: they play on people’s emotions.
Phishing attacks always involve a form of “social engineering,” which is an academic way of saying that scammers use manipulation in their attacks. Commonly, scammers pretend to be a legitimate person or business.
You can get a better idea of how this works by learning about some of the most popular scams circulating today:
The CEO Scam
This scam appears as an email from a leader in your organization, asking for highly sensitive info like company accounts, employee salaries, and Social Security numbers. The hackers “spoof”, or fake, the boss’ email address so it looks like a legitimate internal company email. That’s what makes this scam so convincing — the lure is that you want to do your job and please your boss. But keep this scam in mind if you receive an email asking for confidential or highly sensitive info. Ask the apparent sender directly whether the request is real before acting.
The Urgent Email Attachment
Phishing emails that try to trick you into downloading a dangerous attachment that can infect your computer and steal your private info have been around for a long time. This is because they work. You’ve probably received emails asking you to download attachments confirming a package delivery, trip itinerary, or prize. They might urge you to “respond immediately!” The lure here is offering you something you want and invoking a sense of urgency to get you to click.
The “Lucky” Text or Email
How fortunate! You’ve won a free gift, an exclusive service, or a great deal on a trip to Las Vegas. Just remember, whatever “limited time offer” you’re being sold, it’s probably a phishing scam designed to get you to give up your credit card number or identity info. The lure here is something free or exciting at what appears to be little or no cost to you.
The Romance Scam
This one can happen completely online, over the phone, or in person after contact is established. But the romance scam always starts with someone supposedly looking for love. The scammer often puts a phony ad online or poses as a friend-of-a-friend on social media and contacts you directly. But what starts as the promise of love or partnership, often leads to requests for money or pricey gifts. The scammer will sometimes spin a hardship story, saying they need to borrow money to come visit you or pay their phone bill so they can stay in touch. The lure here is simple — love and acceptance.
While you can’t outright stop phishing attacks from making their way to your computer or phone, you can do several things to keep yourself from falling for them. Further, you can do other things that might make it more difficult for scammers to reach you.
The content and the tone of the message can tell you quite a lot. Threatening messages or ones that play on fear are often phishing attacks, such as angry messages from a so-called tax agent looking to collect back taxes. Other messages will lean heavily on urgency, like a phony overdue payment notice. And during the holidays, watch out for loud, overexcited messages about deep discounts on hard-to-find items. Instead of linking you to a proper e-commerce site, they might link you to a scam shopping site that does nothing but steal your money and the account info you used to pay them. In all, phishing attacks indeed smell fishy. Slow down and review that message with a critical eye. It might tip you off to a scam.
Some phishing attacks can look rather convincing. So much so that you’ll want to follow up on them, like if your bank reports irregular activity on your account or a bill appears to be past due. In these cases, don’t click on the link in the message. Go straight to the website of the business or organization in question and access your account from there. Likewise, if you have questions, you can always reach out to their customer service number or web page.
An unexpected message over social media from a supposed business or agency can itself be a telltale sign of a scam. Consider, would an income tax collector really contact you that way? The answer is no. For example, in the U.S. the Internal Revenue Service (IRS) makes it clear that it will never contact taxpayers via social media. (Let alone send angry, threatening messages.) In all, legitimate businesses and organizations don’t use social media as a channel for official communications. They have established ways they will, and will not, contact you. If you have any doubts about a message you received, contact the business or organization in question directly and follow up with one of their customer service representatives.
Some phishing attacks involve attachments packed with malware, like ransomware, viruses, and keyloggers. If you receive a message with such an attachment, delete it. Even if you receive an email with an attachment from someone you know, follow up with that person. Particularly if you weren’t expecting an attachment from them. Scammers often hijack or spoof email accounts of everyday people to spread malware.
On computers and laptops, you can hover your cursor over links without clicking on them to see the web address. Take a close look at the addresses the message is using. If it’s an email, look at the email address. Maybe the address doesn’t match the company or organization at all. Or maybe it looks like it almost does, yet adds a few letters or words to the name. This marks yet another sign that you might have a phishing attack on your hands. Scammers also commonly use link shorteners, which create links that look like little more than strings of indecipherable text. These shortened links mask the true address, which might indeed lead to a scam site. Delete the message. If possible, report it. Many social media platforms and messaging apps have built-in controls for reporting suspicious accounts and messages.
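For the technically curious, that domain check can be sketched in a few lines of Python. The allowlist below is purely illustrative (a real security product weighs far more signals than this), but it shows the core rule: accept a trusted domain or its subdomains, and treat everything else, including shortened links you can’t inspect, as suspect.

from urllib.parse import urlparse

TRUSTED_DOMAINS = {"mcafee.com", "amazon.com", "yourbank.com"}   # illustrative allowlist only

def looks_suspicious(link: str) -> bool:
    """Return True when the link's hostname isn't a trusted domain or a subdomain of one."""
    hostname = (urlparse(link).hostname or "").lower()
    for domain in TRUSTED_DOMAINS:
        if hostname == domain or hostname.endswith("." + domain):
            return False
    return True

print(looks_suspicious("https://www.amazon.com/orders"))              # False: a real subdomain
print(looks_suspicious("https://amazon.com.account-alerts.net/pay"))  # True: trusted name, wrong domain
print(looks_suspicious("https://bit.ly/3AbCdEf"))                     # True: shortened, can't be verified

Note how the second example sneaks a familiar brand name into the front of an unrelated domain, which is exactly the “almost matches” trick described above.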
On social media and messaging platforms, stick to following, friending, and messaging people who you really know. As for those people who contact you out of the blue, be suspicious. Sad to say, they’re often scammers canvassing these platforms for victims. Better yet, where you can, set your profile to private, which makes it more difficult for scammers to select and stalk you for an attack.
How’d that scammer get your phone number or email address anyway? Chances are, they pulled that info off a data broker site. Data brokers buy, collect, and sell detailed personal info, which they compile from several public and private sources, such as local, state, and federal records, plus third parties like supermarket shopper’s cards and mobile apps that share and sell user data. Moreover, they’ll sell it to anyone who pays for it, including people who’ll use that info for scams. You can help reduce those scam texts and calls by removing your info from those sites. Our Personal Data Cleanup scans some of the riskiest data broker sites and shows you which ones are selling your personal info.
Online protection software can protect you in several ways. First, web protection features can identify malicious links and downloads, helping you avoid clicking them in the first place. Further, our web protection can steer you away from dangerous websites and block malware and phishing sites if you accidentally click a malicious link. Additionally, our Scam Protection feature warns you of sketchy links in emails, texts, and messages. And overall, strong virus and malware protection can further block attacks on your devices. Be sure to protect your smartphones in addition to your computers and laptops, particularly given all the sensitive things we do on them, like banking, shopping, and booking rides and travel.
The post How to Spot Phishing Lures appeared first on McAfee Blog.
Tapping your phone at the cash register makes for a smooth trip to the store. Far smoother than fumbling for your card at the checkout or dealing with a bunch of change. That’s the beauty of the digital wallet on your phone. And with that convenience comes something plenty important — keeping that digital wallet secure.
All the personal info, photos, and banking apps we already have on our phones make them plenty valuable. A digital wallet makes them more valuable still.
A few steps can keep your phone and digital wallet more secure. Further, other steps can protect your cards and identity if that phone gets lost or stolen.
Let’s start with a look at how digital wallets work.
For starters, digital wallets work much like a physical wallet. Through service apps like Apple Pay, Google Pay, Samsung Pay, PayPal, and others, you can store various payment types. That includes debit cards, credit cards, gift cards, and bank accounts.
In general, the transaction is highly secure. When you use your digital wallet to make a purchase, the app creates a random ID for the transaction and uses that ID rather than your actual account number, so your card details stay out of the merchant’s hands. Encryption technology scrambles the info along the way, keeping things safer still.
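To make the idea concrete, here’s a heavily simplified sketch of tokenization in Python. It is not how Apple Pay, Google Pay, or any real payment network implements it, and the WalletService class and its method names are hypothetical, but it illustrates the principle: the merchant only ever sees a random token, while the mapping back to your real card number stays with the wallet provider.

import secrets

class WalletService:
    """A toy stand-in for a digital wallet provider, for illustration only."""

    def __init__(self):
        self._vault = {}   # token -> real card number, held only by the provider

    def tokenize(self, card_number: str) -> str:
        token = secrets.token_hex(8)          # random ID that stands in for the card
        self._vault[token] = card_number
        return token

    def settle(self, token: str) -> str:
        # Only the wallet provider / payment network can map the token back.
        return self._vault.pop(token)

wallet = WalletService()
token = wallet.tokenize("4111 1111 1111 1111")   # a standard test card number, not a real one
print("The merchant sees only:", token)          # never the actual card number

In real payment systems, the token is also bound to your device and paired with a one-time cryptogram for each purchase, which is why a stolen token alone is of little use to a thief.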
A digital wallet is safe, as long as you guard your smartphone just as closely as you would your physical wallet.
Here’s why you should secure your digital wallet, along with a few tips to help you do so.
Fewer people use a lock screen than you might think. A finding from our global research showed that only 56% of adults said that they protect their smartphone with a password or passcode.[i] The problem with going unlocked is that if the phone gets lost or stolen, you’ve handed over a large part of your digital life to a thief. Setting up a lock screen is easy. It’s a simple feature found on iOS and Android devices.
Always protect your digital wallet with a lock, whether a unique passcode, fingerprint scan, or facial ID. This is the best and easiest way to deter cybercriminals. If you use a numerical code, make it different from the passcode on your phone. Also, make sure the numbers are random. Birthdays, anniversaries, house addresses, and the last digits of your phone number are all popular combinations and are crackable codes to a resourceful criminal.
Another way to secure your digital wallet is to make sure you always download the latest software updates. Developers are constantly finding and patching security holes, so the most up-to-date software is often the most secure. Turn on automatic updates to ensure you never miss a new release.
Before you swap your plastic cards for digital payment methods, research the payment app you plan to use. Also, make sure that any app you download comes from the official Apple App Store or Google Play, or from the financial institution’s official website. Then, check out how many downloads and reviews the app has. That’s one way you can make sure you’re downloading an official app and not an imposter. While most of the apps on official stores are legitimate, it’s always smart to check for typos, blurry logos, and unprofessional app descriptions.
So what happens if your phone ends up getting lost or stolen? A combination of device tracking, device locking, and remote erasing can help protect your phone and the data on it. Different device manufacturers have different ways of going about it, but the result is the same — you can prevent others from using your phone. You can even erase it if you’re truly worried that it’s in the wrong hands or if it’s gone for good. Apple provides iOS users with a step-by-step guide, and Google offers up a guide for Android users as well.
No doubt about it. Our phones get more and more valuable as the years go by. With an increasing amount of our financial lives coursing through them, protecting our phones becomes that much more important.
Comprehensive online protection like our McAfee+ plans can protect your phone. And it can protect something else. You. Namely, your privacy and your identity. Here’s a quick rundown: It can …
Protection like this is worth looking into, particularly as our phones become yet more valuable still thanks to digital wallets and payment apps like them.
[i] https://www.mcafee.com/content/dam/consumer/en-us/docs/reports/rp-connected-family-study-2022-global.pdf
The post How to Secure Your Digital Wallet appeared first on McAfee Blog.
In today’s digital age, the line between reality and digital fabrication is increasingly blurred, thanks to the rise of deepfake technology. Deepfakes, sophisticated audio and video manipulations, are a growing concern as they become more realistic and harder to detect. The impact of a deepfake scam can be life-altering, with victims reporting losses ranging from $250 to over half a million dollars. And while not all AI content is created with malicious intent, the ability to know whether a video is real or fake helps consumers make smart and well-informed decisions.
“Knowledge is power, and this has never been more true than in the AI-driven world we’re living in today,” said Roma Majumder, Senior Vice President of Product at McAfee. “No more wondering, is this Warren Buffett investment scheme legitimate, does Taylor Swift really want to give away cookware to fans, or did a politician actually say these words? The answers are provided to you automatically and within seconds with McAfee Deepfake Detector.”
“At McAfee, we’re inspired by the transformative potential of AI and are committed to helping shape a future where AI is used for good. Teaming up with Lenovo boosts our ability to deliver the most effective, automated, AI-powered deepfake detection, offering people a powerful digital guardian on their PCs. Together, we’re able to harness AI in new and revolutionary ways, empowering individuals with the most advanced deepfake detection so they can navigate the evolving online world safely and confidently.”
Recognizing the urgency of this issue, McAfee and Lenovo have come together to empower consumers with privacy-focused, cutting-edge technology designed to identify these deceptive creations and tackle consumer concerns around identifying deepfake scams and misinformation.
“The collaboration between Lenovo and McAfee combines the unique expertise of two global leaders to deliver innovative solutions that offer consumers more trust in the content they view online,” said Igor Bergman, Vice President of Lenovo Cloud and Software, Intelligent Devices Group. “Data shows that nearly two-thirds of people (64%) are more concerned about deepfakes now than they were a year ago. Lenovo’s expertise as an end-to-end technology solutions leader and McAfee’s experience in AI-powered online protection perfectly complement each other, optimizing hardware and software capabilities for the benefit of the consumer.”
In today’s digital landscape, where social media and viral content dominate, distinguishing between what’s real and what’s fabricated online is becoming increasingly challenging. Deepfakes, a term that combines ‘deep learning’ and ‘fake’, are hyper-realistic videos or images created using artificial intelligence to deceive viewers.
Imagine seeing a video of your favorite celebrity in a film they never acted in, or a politician delivering a speech they never actually gave. This is the realm of deepfakes. By utilizing AI, creators can manipulate faces, alter voices, and choreograph actions that never occurred. While some deepfakes are created for entertainment, like humorous videos of talking pets, others serve more sinister purposes. They can be tools for spreading false information, influencing political views, or damaging reputations.
Harmful deepfakes can spread false information, sway political opinion, damage reputations, and power convincing financial scams.
By staying informed and scrutinizing media before sharing, you can improve your ability to spot fakes and reduce the risk of falling victim to these sophisticated scams.
With McAfee Deepfake Detector now available exclusively on select Lenovo AI PCs, consumers who opt in are alerted within seconds if AI-altered audio is detected in videos, without relying on laborious manual video uploads. Trained on close to 200,000 samples (and counting) and leveraging the power of select Lenovo AI PCs equipped with an NPU, McAfee’s AI detection models perform the entire identification process – known as inference – directly on the PC, maximizing on-device processing to keep private user data off the cloud. McAfee does not collect or record a user’s audio in any way, and the user is always in control and can turn audio detection on or off as desired. McAfee’s powerful AI technology, built with privacy in mind, equips consumers with advanced AI detection, with a 96% accuracy rate, to combat the rise in AI-generated scams, deepfakes, and misinformation.
By leveraging the NPU and performing analysis on-device, McAfee keeps detection private, boosts processing speed compared with cloud-based approaches, and improves battery life. These advancements enhance the consumer experience, allowing people to make informed decisions about the content they view and protecting them against cybercrooks who manipulate the audio in videos, all without compromising the speed of their PC. Consumers can use their PC as usual – whether they’re gaming, browsing, or watching videos – while McAfee Deepfake Detector works quietly in the background, alerting them to potential deceptions and scams without hurting performance.
The McAfee Smart AI Hub at McAfee.ai is the online, go-to destination for the latest information and educational content related to AI and cybersecurity, with a focus on deepfakes and AI-driven scams. The Hub also empowers consumers to join the fight against scams by submitting suspicious videos for analysis by McAfee’s advanced AI-powered deepfake detection technology. Insights and trends identified through this analysis will be used to further educate the public, enriching societal understanding and awareness of deepfakes and other artificially generated content, and enhancing everyone’s ability to navigate and stay safe in a digital world increasingly shaped by artificial intelligence.
McAfee Deepfake Detector is available for English-language detection on select new Lenovo AI PCs, ordered on Lenovo.com and through select local retailers beginning August 21, 2024, in the US, UK, and Australia.
Lenovo AI PC customers receive a free 30-day trial of McAfee Deepfake Detector with US pricing starting at $9.99 for the first year.
The post Introducing World’s First Automatic and AI-powered Deepfake Detector appeared first on McAfee Blog.
How do you recognize phishing emails and texts? Even as many of the scammers behind them have made their attacks more sophisticated, you can still pick out telltale signs.
Common to them all, every phishing attack is a cybercrime that aims to steal your sensitive info: personal info, financial info, or both. Other attacks go right for your wallet by selling bogus goods or pushing phony charities.
You’ll find scammers posing as major corporations, friends, business associates, and more. They might try to trick you into providing info like website logins, credit and debit card numbers, and even precious personal info like your Social Security Number.
Phishing scammers often undo their own plans by making simple mistakes that are easy to spot once you know how to recognize them. Check for the following signs of phishing when you open an email or check a text:
It’s poorly written.
Even the biggest companies sometimes let a minor typo slip into their communications. Phishing messages, however, often contain glaring grammatical and spelling mistakes that major corporations wouldn’t make. If you spot blatant errors in an email or text that asks for your personal info, you might be the target of a phishing scam.
The logo doesn’t look right.
Phishing scammers often steal the logos of the businesses they impersonate. However, they don’t always use them correctly. The logo in a phishing email or text might have the wrong aspect ratio or low resolution. If you have to squint to make out the logo in a message, the chances are that it’s phishing.
The URL doesn’t match.
Phishing always centers around links that you’re supposed to click or tap. Before you do, check where the link really leads. Hover over it on a computer, or press and hold it on a phone, to reveal the full web address, then look closely at the domain. A legitimate link points to the company’s real domain, while a phishing link often points to an address that’s slightly off, or to a shortened link that hides its destination entirely. The sketch below shows the basic idea.
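Here’s a minimal sketch of that “close but not quite” check, using only Python’s standard library. The brand list is purely illustrative and real phishing filters are far more sophisticated, but it captures the pattern: a domain that nearly matches a well-known name deserves extra scrutiny.

from difflib import SequenceMatcher
from urllib.parse import urlparse

KNOWN_BRANDS = ["paypal.com", "amazon.com", "netflix.com"]   # illustrative list only

def lookalike_warning(link: str):
    """Warn when a hostname closely resembles, but doesn't match, a known brand."""
    host = (urlparse(link).hostname or "").lower().removeprefix("www.")
    for brand in KNOWN_BRANDS:
        if host == brand or host.endswith("." + brand):
            return None                                      # the genuine domain or a real subdomain
        if SequenceMatcher(None, host, brand).ratio() > 0.8:
            return f"'{host}' looks a lot like '{brand}' but isn't. Treat it as phishing."
    return None

print(lookalike_warning("https://www.paypal.com/signin"))    # None: the genuine domain
print(lookalike_warning("https://www.paypa1.com/signin"))    # warning: a one-character lookalike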
You can also spot a phishing attack when you know what some of the most popular scams are:
The CEO Scam
This scam appears as an email from a leader in your organization, asking for highly sensitive info like company accounts, employee salaries, and Social Security numbers. The hackers “spoof,” or fake, the boss’s email address so it looks like a legitimate internal company email. That’s what makes this scam so convincing — the lure is that you want to do your job and please your boss. But keep this scam in mind if you receive an email asking for confidential or highly sensitive info. Ask the apparent sender directly whether the request is real before acting.
The Urgent Email Attachment
Phishing emails that try to trick you into downloading a dangerous attachment that can infect your computer and steal your private info have been around for a long time. This is because they work. You’ve probably received emails asking you to download attachments confirming a package delivery, trip itinerary, or prize. They might urge you to “respond immediately!” The lure here is offering you something you want and invoking a sense of urgency to get you to click.
The “Lucky” Text or Email
How fortunate! You’ve won a free gift, an exclusive service, or a great deal on a trip to Las Vegas. Just remember, whatever “limited time offer” you’re being sold, it’s probably a phishing scam designed to get you to give up your credit card number or identity info. The lure here is something free or exciting at what appears to be little or no cost to you.
The Romance Scam
This one can happen completely online, over the phone, or in person after contact is established. But the romance scam always starts with someone supposedly looking for love. The scammer often puts a phony ad online or poses as a friend-of-a-friend on social media and contacts you directly. But what starts as the promise of love or partnership often leads to requests for money or pricey gifts. The scammer will sometimes spin a hardship story, saying they need to borrow money to come visit you or pay their phone bill so they can stay in touch. The lure here is simple — love and acceptance.
Account Suspended Scam
Some phishing emails appear to notify you that your bank temporarily suspended your account due to unusual activity. If you receive an account suspension email from a bank that you haven’t opened an account with, delete it immediately, and don’t look back. Suspended account phishing emails from banks you do business with, however, are harder to spot. Use the methods we listed above to check the email’s integrity, and if all else fails, contact your bank directly instead of opening any links within the email you received.
While you can’t outright stop phishing attacks from making their way to your computer or phone, you can do several things to keep yourself from falling for them. Further, you can do other things that might make it more difficult for scammers to reach you.
The content and the tone of the message can tell you quite a lot. Threatening messages or ones that play on fear are often phishing attacks, such as angry messages from a so-called tax agent looking to collect back taxes. Other messages will lean heavily on urgency, like a phony overdue payment notice. And during the holidays, watch out for loud, overexcited messages about deep discounts on hard-to-find items. Instead of linking you off to a proper e-commerce site, they might link you to a scam shopping site that does nothing but steal your money and the account info you used to pay them. In all, phishing attacks indeed smell fishy. Slow down and review that message with a critical eye. It might tip you off to a scam.
Some phishing attacks can look rather convincing. So much so that you’ll want to follow up on them, like if your bank reports irregular activity on your account or a bill appears to be past due. In these cases, don’t click on the link in the message. Go straight to the website of the business or organization in question and access your account from there. Likewise, if you have questions, you can always reach out to their customer service number or web page.
Some phishing attacks occur in social media messengers, so when you get a direct message, consider the source. Would an income tax collector really contact you over social media? The answer is no. For example, in the U.S. the Internal Revenue Service (IRS) makes it clear that it will never contact taxpayers via social media. (Let alone send angry, threatening messages.) In all, legitimate businesses and organizations don’t use social media as a channel for official communications. They have established ways they will, and will not, contact you. If you have any doubts about a message you received, contact the business or organization in question directly and follow up with one of their customer service representatives.
Some phishing attacks involve attachments packed with malware, like ransomware, viruses, and keyloggers. If you receive a message with such an attachment, delete it. Even if you receive an email with an attachment from someone you know, follow up with that person. Particularly if you weren’t expecting an attachment from them. Scammers often hijack or spoof email accounts of everyday people to spread malware.
How’d that scammer get your phone number or email address anyway? Chances are, they pulled that info off a data broker site. Data brokers buy, collect, and sell detailed personal info, which they compile from several public and private sources, such as local, state, and federal records, plus third parties like supermarket shopper’s cards and mobile apps that share and sell user data. Moreover, they’ll sell it to anyone who pays for it, including people who’ll use that info for scams. You can help reduce those scam texts and calls by removing your info from those sites. Our Personal Data Cleanup scans some of the riskiest data broker sites and shows you which ones are selling your personal info.
Online protection software can protect you in several ways. First, web protection features can identify malicious links and downloads, helping you avoid clicking them in the first place. Further, our web protection can steer you away from dangerous websites and block malware and phishing sites if you accidentally click a malicious link. Additionally, our Scam Protection feature warns you of sketchy links in emails, texts, and messages. And overall, strong virus and malware protection can further block attacks on your devices. Be sure to protect your smartphones in addition to your computers and laptops, particularly given all the sensitive things we do on them, like banking, shopping, and booking rides and travel.
The post How to Recognize a Phishing Email appeared first on McAfee Blog.