Private tech companies gather tremendous amounts of user data. These companies can afford to let you use social media platforms free of charge because it’s paid for by your data, attention, and time.
Big tech derives most of its profits by selling your attention to advertisers — a well-known business model. Various documentaries (like Netflix’s “The Social Dilemma”) have tried to get to the bottom of the complex algorithms that big tech companies employ to mine and analyze user data for the benefit of third-party advertisers.
Tech companies benefit from personal info by being able to provide personalized ads. When you click “yes” at the end of a terms and conditions agreement found on some web pages, you might be allowing the companies to collect the following data:
For someone unfamiliar with privacy issues, it is important to understand the extent of big tech’s tracking and data collection. After these companies collect data, all this info can be supplied to third-party businesses or used to improve user experience.
The problem with this is that big tech has blurred the line between collecting customer data and violating user privacy in some cases. While tracking what content you interact with can be justified under the garb of personalizing the content you see, big tech platforms have been known to go too far. Prominent social networks like Facebook and LinkedIn have faced legal trouble for accessing personal user data like private messages and saved photos.
The info you provide helps build an accurate profile of you, which businesses turn into actionable insights. Private data usage can be classified into three cases: selling it to data brokers, using it to personalize marketing, or enhancing the customer experience.
To sell your info to data brokers
Along with big data, another industry has seen rapid growth: data brokers. Data brokers buy, analyze, and package your data. Companies that collect large amounts of data on their users stand to profit from this service. Selling data to brokers is an important revenue stream for big tech companies.
Advertisers and businesses benefit from increased info on their consumers, creating a high demand for your info. The problem here is that companies like Facebook and Alphabet (Google’s parent company) have been known to mine massive amounts of user data for the sake of their advertisers.
To personalize marketing efforts
Marketing can be highly personalized thanks to the availability of large amounts of consumer data. Tracking your response to marketing campaigns can help businesses alter or improve certain aspects of their campaign to drive better results.
The problem is that many of the algorithms behind these campaigns have no built-in limit on how much of your info they collect or use. Past a point, users run the risk of being constantly subjected to intrusive ads and other marketing campaigns they never consented to.
To cater to the customer experience
Analyzing consumer behavior through reviews, feedback, and recommendations can help improve customer experience. Businesses have access to various facets of data that can be analyzed to show them how to meet consumer demands. This might help improve any part of a consumer’s interaction with the company, from designing special offers and discounts to improving customer relationships.
For most social media platforms, the goal is to curate a personalized feed that appeals to users and allows them to spend more time on the app. When left unmonitored, the powerful algorithms behind these social media platforms can repeatedly subject you to the same kind of content from different creators.
Here are the big tech companies that collect and mine the most user data.
Users need a comprehensive data privacy solution to tackle the rampant, large-scale data mining carried out by big tech platforms. While targeted ads and convenient product recommendations can be useful, many of these companies collect and mine user data through several channels simultaneously, exploiting it in several ways.
It’s important to ensure your personal info is protected. Protection solutions like McAfee’s Personal Data Cleanup feature can help. It scours the web for traces of your personal info and helps remove it for your online privacy.
McAfee+ provides antivirus software for all your digital devices and a secure VPN connection to avoid exposure to malicious third parties while browsing the internet. Our Identity Monitoring and personal data removal solutions further remove gaps in your devices’ security systems.
With our data protection and custom guidance (complete with a protection score for each platform and tips to keep you safer), you can be sure that your internet identity is protected.
The post What Personal Data Do Companies Track? appeared first on McAfee Blog.
Two-step verification, two-factor authentication, multi-factor authentication…whatever your social media platform calls it, it’s an excellent way to protect your accounts.
There’s a good chance you’re already using multi-factor verification with your other accounts — for your bank, your finances, your credit card, and any number of things. The way it requires an extra one-time code in addition to your login and password makes life far tougher for hackers.
It’s increasingly common nowadays: all manner of online services only grant access to your accounts after you’ve provided a one-time passcode sent to your email or smartphone. That’s where two-step verification comes in. You get sent a code as part of your usual login process (usually a six-digit number), and then you enter it along with your username and password.
Some online services also offer the option to use an authenticator app, which generates the code in a secure app on your device rather than sending it via email or text message. Authenticator apps work much the same way, yet they offer three unique features:
Google, Microsoft, and others offer authenticator apps if you want to go that route. You can get a good list of options by checking out the “editor’s picks” at your app store or in trusted tech publications.
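For the curious, most authenticator apps implement open standards: HOTP (RFC 4226) and its time-based variant TOTP (RFC 6238). The app and the service share a secret key during setup, and each then independently computes the same six-digit code from the current 30-second time window. Here is a minimal sketch using only Python's standard library; the base32 secret shown is a throwaway placeholder, not a real account key:

```python
import base64
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 one-time password: HMAC-SHA1 plus dynamic truncation."""
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, interval: int = 30) -> str:
    """RFC 6238 time-based code: HOTP over the current 30-second window."""
    padded = secret_b32.upper() + "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(padded)
    return hotp(key, int(time.time()) // interval)

# Placeholder secret for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code depends only on the shared secret and the clock, it works even when your phone is offline, which is one reason authenticator apps are considered more robust than SMS codes.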
Whichever form of authentication you use, always keep that secure code to yourself. It’s yours and yours alone. Anyone who asks for that code, say someone masquerading as a customer service rep, is trying to scam you. With that code, and your username/password combo, they can get into your account.
Passwords and two-step verification work hand-in-hand to keep you safer. Yet not any old password will do. You’ll want a strong, unique password. Here’s how that breaks down:
Now, with strong passwords in place, you can get to setting up multi-factor verification on your social media accounts.
When you set up two-factor authentication on Facebook, you’ll be asked to choose one of three security methods: a security key, login codes from an authentication app, or text message (SMS) codes.
And here’s a link to the company’s full walkthrough: https://www.facebook.com/help/148233965247823
When you set up two-factor authentication on Instagram, you’ll be asked to choose one of three security methods: an authentication app, text message, or WhatsApp.
And here’s a link to the company’s full walkthrough: https://help.instagram.com/566810106808145
And here’s a link to the company’s full walkthrough: https://faq.whatsapp.com/1920866721452534
And here’s a link to the company’s full walkthrough: https://support.google.com/accounts/answer/185839?hl=en&co=GENIE.Platform%3DDesktop
1. Tap Profile at the bottom of the screen.
2. Tap the Menu button at the top.
3. Tap Settings and Privacy, then Security.
4. Tap 2-step verification and choose at least two verification methods: SMS (text), email, and authenticator app.
5. Tap Turn on to confirm.
And here’s a link to the company’s full walkthrough: https://support.tiktok.com/en/account-and-privacy/personalized-ads-and-data/how-your-phone-number-is-used-on-tiktok
The post How to Protect Your Social Media Passwords with Multi-factor Verification appeared first on McAfee Blog.
How do you recognize phishing emails and texts? Even as many of the scammers behind them have made their attacks more sophisticated, you can still pick out telltale signs.
Common to them all, every phishing attack is a cybercrime that aims to steal your sensitive info: personal info, financial info. Other attacks go right for your wallet by selling bogus goods or pushing phony charities.
You’ll find scammers posing as major corporations, friends, business associates, and more. They might try to trick you into providing info like website logins, credit and debit card numbers, and even precious personal info like your Social Security Number.
Phishing scammers often undo their own plans by making simple mistakes that are easy to spot once you know how to recognize them. Check for the following signs of phishing when you open an email or check a text:
It’s poorly written.
Even the biggest companies sometimes make minor errors in their communications. Phishing messages often contain grammatical errors, spelling mistakes, and other blatant errors that major corporations wouldn’t make. If you see glaring grammatical errors in an email or text that asks for your personal info, you might be the target of a phishing scam.
The logo doesn’t look right.
Phishing scammers often steal the logos of the businesses they impersonate. However, they don’t always use them correctly. The logo in a phishing email or text might have the wrong aspect ratio or low resolution. If you have to squint to make out the logo in a message, the chances are that it’s phishing.
The URL doesn’t match.
Phishing always centers around links that you’re supposed to click or tap. Here are a few ways to check whether a link someone sent you is legitimate:
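One such check can even be automated: extract the link's true hostname and compare it against the domain it claims to be. The following is a hypothetical Python sketch using the standard library's `urllib.parse`; the domains shown are made up for illustration:

```python
from urllib.parse import urlparse

def link_host(url: str) -> str:
    """Extract the hostname a link actually points to."""
    return (urlparse(url).hostname or "").lower()

def is_suspicious(href: str, expected_domain: str) -> bool:
    """True if the link's real destination is neither the expected
    domain nor one of its subdomains."""
    host = link_host(href)
    return not (host == expected_domain
                or host.endswith("." + expected_domain))

# A lookalike link: it *contains* "paypal.com" but points elsewhere.
print(is_suspicious("https://paypal.com.account-verify.example/login",
                    "paypal.com"))  # flagged
print(is_suspicious("https://www.paypal.com/signin",
                    "paypal.com"))  # legitimate subdomain
```

Note how `paypal.com.account-verify.example` would pass a naive "does it mention paypal.com?" glance but fails this check. Scammers rely on exactly that confusion, which is why only the end of the hostname matters.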
You can also spot a phishing attack when you know what some of the most popular scams are:
The CEO Scam
This scam appears as an email from a leader in your organization, asking for highly sensitive info like company accounts, employee salaries, and Social Security numbers. The hackers “spoof”, or fake, the boss’ email address so it looks like a legitimate internal company email. That’s what makes this scam so convincing — the lure is that you want to do your job and please your boss. But keep this scam in mind if you receive an email asking for confidential or highly sensitive info. Ask the apparent sender directly whether the request is real before acting.
The Urgent Email Attachment
Phishing emails that try to trick you into downloading a dangerous attachment that can infect your computer and steal your private info have been around for a long time. This is because they work. You’ve probably received emails asking you to download attachments confirming a package delivery, trip itinerary, or prize. They might urge you to “respond immediately!” The lure here is offering you something you want and invoking a sense of urgency to get you to click.
The “Lucky” Text or Email
How fortunate! You’ve won a free gift, an exclusive service, or a great deal on a trip to Las Vegas. Just remember, whatever “limited time offer” you’re being sold, it’s probably a phishing scam designed to get you to give up your credit card number or identity info. The lure here is something free or exciting at what appears to be little or no cost to you.
The Romance Scam
This one can happen completely online, over the phone, or in person after contact is established. But the romance scam always starts with someone supposedly looking for love. The scammer often puts a phony ad online or poses as a friend-of-a-friend on social media and contacts you directly. But what starts as the promise of love or partnership, often leads to requests for money or pricey gifts. The scammer will sometimes spin a hardship story, saying they need to borrow money to come visit you or pay their phone bill so they can stay in touch. The lure here is simple — love and acceptance.
Account Suspended Scam
Some phishing emails appear to notify you that your bank temporarily suspended your account due to unusual activity. If you receive an account suspension email from a bank that you haven’t opened an account with, delete it immediately, and don’t look back. Suspended account phishing emails from banks you do business with, however, are harder to spot. Use the methods we listed above to check the email’s integrity, and if all else fails, contact your bank directly instead of opening any links within the email you received.
While you can’t outright stop phishing attacks from making their way to your computer or phone, you can do several things to keep yourself from falling for them. Further, you can do other things that might make it more difficult for scammers to reach you.
The content and the tone of the message can tell you quite a lot. Threatening messages or ones that play on fear are often phishing attacks, such as angry messages from a so-called tax agent looking to collect back taxes. Other messages will lean heavily on urgency, like a phony overdue payment notice. And during the holidays, watch out for loud, overexcited messages about deep discounts on hard-to-find items. Instead of linking you off to a proper e-commerce site, they might link you to a scam shopping site that does nothing but steal your money and the account info you used to pay them. In all, phishing attacks indeed smell fishy. Slow down and review that message with a critical eye. It might tip you off to a scam.
Some phishing attacks can look rather convincing. So much so that you’ll want to follow up on them, like if your bank reports irregular activity on your account or a bill appears to be past due. In these cases, don’t click on the link in the message. Go straight to the website of the business or organization in question and access your account from there. Likewise, if you have questions, you can always reach out to their customer service number or web page.
Some phishing attacks occur in social media messengers. When you get direct messages, consider the source. Ask yourself: would an income tax collector contact you over social media? The answer there is no. For example, in the U.S. the Internal Revenue Service (IRS) makes it clear that they will never contact taxpayers via social media. (Let alone send angry, threatening messages.) In all, legitimate businesses and organizations don’t use social media as a channel for official communications. They have established ways they will, and will not, contact you. If you have any doubts about a communication you received, contact the business or organization in question directly. Follow up with one of their customer service representatives.
Some phishing attacks involve attachments packed with malware, like ransomware, viruses, and keyloggers. If you receive a message with such an attachment, delete it. Even if you receive an email with an attachment from someone you know, follow up with that person. Particularly if you weren’t expecting an attachment from them. Scammers often hijack or spoof email accounts of everyday people to spread malware.
How’d that scammer get your phone number or email address anyway? Chances are, they pulled that info off a data broker site. Data brokers buy, collect, and sell detailed personal info, which they compile from several public and private sources, such as local, state, and federal records, plus third parties like supermarket shopper’s cards and mobile apps that share and sell user data. Moreover, they’ll sell it to anyone who pays for it, including people who’ll use that info for scams. You can help reduce those scam texts and calls by removing your info from those sites. Our Personal Data Cleanup scans some of the riskiest data broker sites and shows you which ones are selling your personal info.
Online protection software can protect you in several ways. First, it can offer web protection features that can identify malicious links and downloads, which can help prevent clicking them. Further, features like our web protection can steer you away from dangerous websites and block malware and phishing sites if you accidentally click on a malicious link. Additionally, our Scam Protection feature warns you of sketchy links in emails, texts, and messages. And overall, strong virus and malware protection can further block any attacks on your devices. Be sure to protect your smartphones in addition to your computers and laptops as well, particularly given all the sensitive things we do on them, like banking, shopping, and booking rides and travel.
The post How to Recognize a Phishing Email appeared first on McAfee Blog.
Before your phone gets lost or stolen, put some basic steps in place.
You’ll want to act quickly, so preparation is everything. With the right measures, you can find it, recover it, or even erase it if needed. These steps can get you set up so you can do exactly that.
Lock your phone.
Locking your phone is one of the most basic smartphone security measures you can take. Trouble is, few of us do it. Our recent global research showed that only 56% of adults said that they protect their smartphone with a password, passcode, or other form of lock.[i] In effect, an unlocked phone is an open book to anyone who finds or steals it.
Setting up a lock screen is easy. It’s a simple feature found on iOS and Android devices. iPhones and Androids have an auto-lock feature that locks your phone after a certain period of inactivity. Keep this time on the low end, one minute or less, to help prevent unauthorized access.
We suggest using a six-digit PIN or passcode rather than a gesture to unlock your phone. PINs and passcodes are more complex and more secure. Researchers proved as much with a little “shoulder surfing” test: they looked at how well one group of subjects could unlock a phone after observing the way another group of subjects unlocked it.[ii]
Turn on “Find My Phone.”
Another powerful tool at your disposal is the “Find My Phone” feature, made possible by GPS technology. It can help you pinpoint a lost or stolen phone, provided the phone has an active data or Wi-Fi connection and its GPS location services enabled. Even if the phone gets powered down or loses connection, the feature can guide you to its last known location.
Setting up this feature is easy. Apple offers a comprehensive web page on how to enable and use their “Find My” feature for phones (and other devices too). Android users can get a step-by-step walkthrough on Google’s Android support page as well.
Back up your stuff in the cloud.
Thanks to cloud storage, you might be able to recover your photos, files, apps, notes, contact info, and more if your phone is lost or stolen. Android owners can learn how to set up cloud backup with Google Drive here, and iPhone users can learn the same for iCloud here.
Write down your phone’s unique ID number.
Here are a couple of acronyms for you: IMEI (International Mobile Equipment Identity) and MEID (Mobile Equipment Identifier), two types of unique ID numbers assigned to smartphones. Find yours and write it down. (On most phones, you can display it by dialing *#06#.) In case of loss or theft, your mobile carrier, police department, or insurance provider might ask for this info to assist in the phone’s return or reimbursement for its loss.
Beyond digital security measures, plenty of loss and theft prevention falls on you. Treat your phone like the desirable item it is. That’s a big step when it comes to preventing theft.
Keep your phone close.
And by close, we mean on your person. It’s easy to leave your phone on the table at a coffee shop, on a desk in a shared workspace, or on a counter when you’re shopping. Thieves might jump on any of these opportunities for a quick snatch-and-grab. You’re better off with your phone in your pocket or zipped up in a bag that you keep close.
Secure your bags and the devices you carry in them.
Enterprising thieves will find a way. They’ll snatch your bag while you’re not looking. Or they might even slice into it with a knife to get what’s inside, like your phone.
Keep your bag or backpack close. If you’re stopping to grab a bite to eat, sling the handles through a chair leg. If you have a strong metal carabiner, you can use that too. Securing your bag like that can make it much tougher for a thief to walk by and swipe it. For extra security, look into a slash-resistant bag.
If you have a credit card and ID holder attached to the back of your phone, you might want to remove your cards from it. That way, if your phone gets snatched, those important cards won’t get snatched as well.
In the event of your phone getting lost or stolen, a combination of device tracking, device locking, and remote erasing can help protect your phone and the data on it.
Different device manufacturers have different ways of going about it. But the result is the same — you can prevent others from using your phone, and even erase it if you’re truly worried that it’s in the wrong hands or gone for good. Apple provides iOS users with a step-by-step guide, and Google offers up a guide for Android users as well.
Apple’s Find My app takes things a step further. Beyond locating a lost phone or wiping it, Find My can also mark the item as lost, notify you if you’ve left it behind, or trigger a sound to help you locate it. (A huge boon in that couch cushion scenario!) Drop by Apple’s page dedicated to the Find My app for more details on what you can do on which devices, along with instructions on how.
With preparation and prevention, you can give yourself reassurance if your phone gets lost or stolen. You have plenty of recovery options, in addition to plenty of ways to prevent bad actors from getting their hands on the sensitive info you keep on it.
[i] https://www.mcafee.com/content/dam/consumer/en-us/docs/reports/rp-connected-family-study-2022-global.pdf
[ii] https://arxiv.org/abs/1709.04959
The post What Should I do If My Phone Gets Stolen or Lost? appeared first on McAfee Blog.
In today’s interconnected world, our mobile devices serve as essential tools for communication, productivity, and entertainment. However, for some tech-savvy users, the allure of unlocking the full potential of their devices through jailbreaking (for iOS) or rooting (for Android) can be tempting. While these processes offer users greater control and customization over their devices, they also raise significant questions about security implications.
Jailbreaking is the process of removing the limitations that Apple and associated carriers impose on devices running the iOS operating system. To “jailbreak” a phone means to give its owner full access to the root of the operating system and all its features. Jailbroken phones entered the mainstream when Apple first released the iPhone exclusively on AT&T’s network: users who wanted an iPhone with another carrier couldn’t have one unless it was jailbroken.
Similar to jailbreaking, “rooting” is the term for the process of removing the limitations on a mobile or tablet running the Android operating system. By gaining privileged control, often referred to as “root access,” over an Android device’s operating system, users can modify system files, remove pre-installed bloatware, install custom ROMs, and unlock features not accessible on stock devices.
Rooting or jailbreaking grants users deeper access to the device’s operating system, allowing for extensive customization of the user interface, system settings, and even hardware functionality. Advanced users can optimize system performance, remove unnecessary bloatware, and tweak settings to improve battery life, speed, and responsiveness.
However, hacking your device potentially opens security holes that may not have been readily apparent, or undermines the device’s built-in security measures. Jailbroken and rooted phones are much more susceptible to viruses and malware because users can bypass the Apple and Google application vetting processes that help ensure users download virus-free apps.
In addition to security vulnerabilities, hacking your device may lead to a voided manufacturer’s warranty, leaving you without official support for repairs or replacements. Altering the device’s operating system can also lead to instability, crashes, and performance issues, especially if incompatible software or modifications are installed.
While rooting or jailbreaking may offer users enticing opportunities for customization and optimization of their mobile devices, the associated risks cannot be overlooked. By circumventing built-in security measures, users expose their devices to potential security vulnerabilities, making them more susceptible to viruses and malware. Ultimately, the decision to root or jailbreak a mobile device should be made with careful consideration of the trade-offs involved, as the security risks often outweigh the benefits.
When thinking about mobile security risks, consider adding reputable mobile security software to your device to augment the built-in security measures. These security solutions provide real-time scanning and threat detection capabilities, helping to safeguard sensitive data and maintain the integrity of the device’s operating system.
The post How Does Jailbreaking Or Rooting Affect My Mobile Device Security? appeared first on McAfee Blog.
“Vishing” occurs when criminals cold-call victims and attempt to persuade them to divulge personal information over the phone. These scammers are generally after credit card numbers and personal identifying information, which can then be used to commit financial theft. Vishing can occur both on your landline phone or via your cell phone.
The term is a combination of “voice” and “phishing,” the latter being the use of spoofed emails to trick targets into clicking malicious links. Rather than email, vishing generally relies on automated phone calls that instruct targets to provide account numbers. Techniques scammers use to get your phone numbers include:
Once vishers have phone numbers, they employ various strategies to deceive their targets and obtain valuable personal information:
To protect yourself from vishing scams, you should:
Staying vigilant and informed is your best defense against vishing scams. By verifying caller identities, being skeptical of unsolicited requests for personal information, and using call-blocking tools, you can significantly reduce your risk of falling victim to these deceptive practices. Additionally, investing in identity theft protection services can provide an extra layer of security. These services monitor your personal information for suspicious activity and offer assistance in recovering from identity theft, giving you peace of mind in an increasingly digital world. Remember, proactive measures and awareness are key to safeguarding your personal information against vishing threats.
The post How to Protect Yourself from Vishing appeared first on McAfee Blog.
My mother recently turned 80, so of course a large celebration was in order. With 100 plus guests, entertainment, and catering to organise, the best way for me to keep everyone updated (and share tasks) was to use Google Docs. Gee, it worked well. My updates could immediately be seen by everyone, the family could access it from all the devices, and it was free to use! No wonder Google has a monopoly on drive and document sharing.
But here’s the thing – hackers know just how much both individuals and businesses have embraced Google products. So, it makes complete sense that they use reputable companies such as Google to devise phishing emails that are designed to extract our personal information. In fact, the Google Docs phishing scam was widely regarded as one of the most successful personal data extraction scams to date. They know that billions of people worldwide use Google so an invitation to click a link and view a document does not seem like an unreasonable email to receive. But it caused so much grief for so many people.
Emails designed to trick you into sharing your personal information are a scammer’s bread and butter. This is essentially what phishing is. It is by far the most successful tool they use to get their hands on your personal data and access your email.
‘But why do they want my email logins?’ – I hear you ask. Well, email accounts are what every scammer dreams of – they are a treasure trove of personally identifiable material that they can either steal or exploit. They could also use your email to launch a wide range of malicious activities, from spamming and spoofing to spear phishing. Complicated terms, I know, but in essence these are different types of phishing strategies. So, you can see why they are keen!!
But successful phishing emails usually share a few characteristics, and it’s important to know them. Firstly, the email looks like it has been sent from a legitimate company, e.g. Microsoft, Amex, or Google. Secondly, the email has a strong ‘call to action’, e.g. ‘your password has been changed, if this is not the case, please click here’. And thirdly, the email does not seem too out of place or random from the potential victim’s perspective.
Despite the fact that scammers are savvy tricksters, there are steps you can take to maximise the chances your email remains locked away from their prying eyes. Here’s what I suggest:
Never respond to an unexpected email or website that asks you for personal information or your login details no matter how professional it looks. If you have any doubts, always contact the company directly to verify.
Make sure you have super-duper internet security software that includes all the bells and whistles. Not only does McAfee+ include protection for daily browsing, but it also has a password manager, a VPN, and a social privacy manager that will lock down the privacy settings on your social media accounts. A complete no-brainer!
Avoid using public Wi-Fi to log into your email from public places. It takes very little effort for a hacker to position themselves between you and the connection point. So, it’s entirely possible for them to be in receipt of all your private information and logins which clearly you don’t want. If you really need to use it, invest in a Virtual Private Network (VPN) which will ensure everything you share via Wi-Fi will be encrypted. Your McAfee+ subscription includes a VPN.
Public computers should also be avoided even just to ‘check your email’. Not only is there a greater chance of spyware on untrusted computers but some of them sport key-logging programs which can both monitor and record the keys you strike on the keyboard – a great way of finding out your password!
Ensuring each of your online accounts has its own unique, strong, and complex password is one of the best ways of keeping hackers out of your life. I always suggest at least 10-12 characters with a combination of upper and lower case letters, symbols, and numbers. A crazy nonsensical sentence is a great option here but better still is a password manager that will remember and generate passwords that no human could! A password manager is also part of your McAfee+ online security pack.
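This is also exactly what a password manager's generator does under the hood. As a rough sketch, here is what generating a password meeting those criteria looks like in Python, using the cryptographically secure `secrets` module; the symbol set is an arbitrary choice for illustration:

```python
import secrets
import string

# Arbitrary symbol set for this sketch; real managers use a wider range.
SYMBOLS = "!@#$%^&*"

def make_password(length: int = 12) -> str:
    """Generate a random password (length >= 4) guaranteed to contain
    at least one upper-case letter, lower-case letter, digit, and symbol."""
    pools = [string.ascii_uppercase, string.ascii_lowercase,
             string.digits, SYMBOLS]
    # One guaranteed character from each pool...
    chars = [secrets.choice(pool) for pool in pools]
    # ...then fill the rest from all pools combined.
    everything = "".join(pools)
    chars += [secrets.choice(everything) for _ in range(length - len(pools))]
    # Shuffle so the guaranteed characters don't always come first.
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(make_password(12))
```

The key design choice is `secrets` rather than the `random` module: `random` is predictable by design, while `secrets` draws from the operating system's cryptographic randomness, which is what you want for anything security-related.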
Even if you have taken all the necessary steps to protect your email from hackers, there is the chance that your email logins may be leaked in a data breach. A data breach happens when a company’s data is accessed by scammers and customers’ personal information is stolen. You may remember the Optus, Medibank and Latitude hacks of 2022/23?
If you have had your personal information stolen, please be assured that there are steps you can take to remedy this. The key is to act fast. Check out my recent blog post here for everything you need to know.
So, next time you’re organising a big gathering don’t hesitate to use Google Docs to plan or Microsoft Teams to host your planning meetings. While the thought of being hacked might make you want to withdraw, please don’t. Instead, cultivate a questioning mindset in both yourself and your kids, and always have a healthy amount of suspicion when going about your online life. You’ve got this!!
Till next time,
Stay safe!
Alex
The post How To Prevent Your Emails From Being Hacked appeared first on McAfee Blog.
I think I could count on one hand the people I know who have NOT had their email hacked. Maybe they found a four-leaf clover when they were kids!
Email hacking is one of the very unfortunate downsides of living in our connected, digital world. And it usually occurs as a result of a data breach – a situation that even the savviest tech experts find themselves in.
In simple terms, a data breach happens when personal information is accessed, disclosed without permission, or lost. Companies, organisations, and government departments of any size can be affected. Data stolen can include customer login details (email addresses and passwords), credit card numbers, identifying IDs of customers e.g. driver’s license numbers and/or passport numbers, confidential customer information, company strategy, or even matters of national security.
Data breaches have made headlines, particularly over the last few years. When the Optus and Medibank data breaches hit the news in 2022, affecting almost 10 million Aussies apiece, we were all shaken. But then when Aussie finance company Latitude was affected in 2023, with a whopping 14 million people from both Australia and New Zealand caught up in it, it almost felt inevitable that by now, most of us would have been impacted.
But these were the data breaches that grabbed our attention. The reality is that data breaches have been happening for years. In fact, the largest data breach in Australian history actually happened in May 2019 to the online design site Canva which affected 137 million users globally including many Aussies.
So, in short – it can happen to anyone, and the chances are you may have already been affected.
The sole objective of a hacker is to get their hands on your data. And any information that you share in your email account can be very valuable to them. But why do they want your data, you ask? It’s simple really – so they can cash in! Some will keep the juicy stuff for themselves – passwords or logins to government departments or large companies they may want to ’target’ with the aim of extracting valuable data and/or funds. But the more sophisticated ones will sell your details including name, telephone, email address, and credit card details, and cash in on the Dark Web. They often do this in batches. Some experts believe they can get as much as AU$250 for a full set of details including credit cards. So, you can see why they’d be interested in you!
The other reason why hackers will be interested in your email address and password is that many of us re-use these login details across our other online accounts too. So, once they’ve got their hands on your email credentials then they may be able to access your online banking and investment accounts – the possibilities are endless if you are using the same login credentials everywhere. So, you can see why I harp on about using a unique password for every online account!
There is a plethora of statistics on just how big this issue is – all of them concerning.
According to the Australian Institute of Criminology, there were over 16,000 reports of identity theft in 2022.
The Department of Home Affairs and Stay Smart Australia report that cybercrime costs Australian businesses $29 billion a year, with the average business spending around $275,000 to remedy a data breach.
And although there has been a slight reduction in Aussies falling for phishing scams in recent years (down from 2.7% in 2020/1 to 2.5% in 2022/3), more Australians are falling victim to card fraud scams with a total of $2.2 billion lost in 2023.
But regardless of which statistic you choose to focus on, we have a big issue on our hands!
If you find yourself a victim of email hacking there are a few very important steps you need to take and the key is to take them FAST!!
This is the very first thing you must do to ensure the hacker can’t get back into your account. It is essential that your new password is complex and totally unrelated to previous passwords. Always use at least 8-10 characters with a variety of upper and lower case and throw in some symbols and numbers. I really like the idea of a crazy, nonsensical sentence – easier to remember and harder to crack! But, better still, get yourself a password manager that will create a password that no human would be capable of creating.
If you find the hacker has locked you out of your account by changing your password, you will need to reset the password by clicking on the ‘Forgot My Password’ link.
This is time-consuming but essential. Ensure you change any other accounts that use the same username and password as your compromised email. Hackers love the fact that many people still use the same logins for multiple accounts, so it is guaranteed they will try your info in other email applications and sites such as PayPal, Amazon, Netflix – you name it!
Once the dust has settled, please review your password strategy for all your online accounts. A best practice is to ensure every online account has its own unique and complex password.
A big part of the hacker’s strategy is to ‘get their claws’ into your address book with the aim of hooking others as well. Send a message to all your email contacts as soon as possible so they know to avoid opening any emails (most likely loaded with malware) that have come from you.
Yes, multi-factor authentication (or 2-factor authentication) adds another step to your login but it also adds another layer of protection. Enabling this will mean that in addition to your password, you will need a special one-time use code to log in. This can be sent to your mobile phone or alternatively, it may be generated via an authenticator app. So worthwhile!
It is not uncommon for hackers to modify your email settings so that a copy of every email you receive is automatically forwarded to them. Not only can they monitor your logins for other sites, but they’ll keep a watchful eye over any particularly juicy personal information. So, check your mail forwarding settings to ensure no unexpected email addresses have been added.
Don’t forget to check your email signature to ensure nothing spammy has been added. Also, ensure your ‘reply to’ email address is actually yours! Hackers have been known to create an email address here that looks similar to yours – when someone replies, it goes straight to their account, not yours!
This is essential also. If you find anything, please ensure it is addressed, and then change your email password again. And if you don’t have it – please invest. Comprehensive security software will provide you with a digital shield for your online life. McAfee+ lets you protect all your devices – including your smartphone – from viruses and malware. It also contains a password manager to help you remember and generate unique passwords for all your accounts.
If you have been hacked several times and your email provider isn’t mitigating the amount of spam you are receiving, then consider starting afresh but don’t delete your email address. Many experts warn against deleting email accounts as most email providers will recycle your old email address. This could mean a hacker could spam every site they can find with a ‘forgot my password’ request and try to impersonate you – identity theft!
Your email is an important part of your online identity so being vigilant and addressing any fallout from hacking is essential for your digital reputation. And even though it may feel that ‘getting hacked’ is inevitable, you can definitely reduce your risk by installing some good quality security software on all your devices. Comprehensive security software such as McAfee+ will alert you when visiting risky websites, warn you when a download looks ‘dodgy’, and will block annoying and dangerous emails with anti-spam technology.
It makes sense really – if you don’t receive the ‘dodgy’ phishing email – you can’t click on it! Smart!
And finally, don’t forget that hackers love social media – particularly those of us who overshare on it. So, before you post details of your adorable new kitten, remember it may just provide the perfect clue for a hacker trying to guess your email password!
Till next time,
Alex
The post What to Do If Your Email Is Hacked appeared first on McAfee Blog.
You consider yourself a responsible person when it comes to taking care of your physical possessions. You’ve never left your wallet in a taxi or lost an expensive ring down the drain. You never let your smartphone out of your sight, yet one day you notice it’s acting oddly.
Did you know that your device can fall into cybercriminals’ hands without ever leaving yours? SIM swapping is a method that allows criminals to take control of your smartphone and break into your online accounts.
Don’t worry: there are a few easy steps you can take to safeguard your smartphone from prying eyes and get back to using your devices confidently.
First off, what exactly is a SIM card? SIM stands for subscriber identity module, and it is a small chip that ties your phone to you: it identifies you to your mobile carrier and stores your phone number and plan details (and, on some phones, a limited set of contacts and texts). Your photos and apps live on the phone itself, not the SIM. In most cases, you can pop your SIM card out of an old phone and into a new one to move your number and service over.
Unlike what the name suggests, SIM swapping doesn’t require a cybercriminal to get access to your physical phone and steal your SIM card. SIM swapping can happen remotely. A hacker, with a few important details about your life in hand, can answer security questions correctly, impersonate you, and convince your mobile carrier to reassign your phone number to a new SIM card. At that point, the criminal can get access to your phone’s data and start changing your account passwords to lock you out of your online banking profile, email, and more.
SIM swapping was especially relevant right after the AT&T data leak. Cybercriminals stole millions of phone numbers and the users’ associated personal details. They could later use these details to SIM swap, allowing them to receive users’ text or email two-factor authentication codes and gain access to their personal accounts.
The most glaring sign that your phone number was reassigned to a new SIM card is that your current phone no longer connects to the cell network. That means you won’t be able to make calls, send texts, or surf the internet when you’re not connected to Wi-Fi. Since most people use their smartphones every day, you’ll likely find out quickly that your phone isn’t functioning as it should.
Additionally, when a SIM card is no longer active, the carrier will often send a notification text. If you receive one of these texts but didn’t deactivate your SIM card, use someone else’s phone or landline to contact your wireless provider.
Check out these tips to keep your device and personal information safe from SIM swapping.
With just a few simple steps, you can feel better about the security of your smartphone, cellphone number, and online accounts. If you’d like extra peace of mind, consider signing up for an identity theft protection service like McAfee+. McAfee, on average, detects suspicious activity ten months earlier than similar monitoring services. Time is of the essence in cases of SIM swapping and other identity theft schemes. An identity protection partner can restore your confidence in your online activities.
The post How to Protect Your Smartphone from SIM Swapping appeared first on McAfee Blog.
It’s that time of year again – tax season! Whether you’ve already filed in the hopes of an early refund or have yet to start the process, one thing is for sure: cybercriminals will certainly use tax season as a means to get victims to give up their personal and financial information. This time of year is advantageous for malicious actors since the IRS and tax preparers are some of the few people who actually need your personal data. As a result, consumers are targeted with various scams impersonating trusted sources like the IRS or DIY tax software companies. Fortunately, every year the IRS outlines the most prevalent tax scams, such as voice phishing, email phishing, and fake tax software scams. Let’s explore the details of these threats.
So, how do cybercriminals use voice phishing to impersonate the IRS? Voice phishing, a form of criminal phone fraud, uses social engineering tactics to gain access to victims’ personal and financial information. For tax scams, criminals will make unsolicited calls posing as the IRS and leave voicemails requesting an immediate callback. The crooks will then demand that the victim pay a phony tax bill in the form of a wire transfer, prepaid debit card or gift card. In one case outlined by Forbes, victims received emails in their inbox that allegedly contained voicemails from the IRS. The emails didn’t actually contain any voicemails but instead directed victims to a suspicious SharePoint URL. Last year, a number of SharePoint phishing scams occurred as an attempt to steal Office 365 credentials, so it’s not surprising that cybercriminals are using this technique to access taxpayers’ personal data now as well.
In addition to voice phishing schemes, malicious actors are also using email to try and get consumers to give up their personal and financial information. This year alone, almost 400 IRS phishing URLs have been reported. In a typical email phishing scheme, scammers try to obtain personal tax information like usernames and passwords by using spoofed email addresses and stolen logos. In many cases, the emails contain suspicious hyperlinks that redirect users to a fake site or PDF attachments that may download malware or viruses. If a victim clicks on these malicious links or attachments, they can seriously endanger their tax data by giving identity thieves the opportunity to steal their refund. What’s more, cybercriminals are also using subject lines like “IRS Important Notice” and “IRS Taxpayer Notice” and demanding payment or threatening to seize the victim’s tax refund.
Cybercriminals are even going so far as to impersonate trusted brands like TurboTax for their scams. In this case, DIY tax preparers who search for TurboTax software on Google are shown ads for pirated versions of TurboTax. The victims will pay a fee for the software via PayPal, only to have their computer infected with malware after downloading the software. You may be wondering, how do victims happen upon this malicious software through a simple Google search? Unfortunately, scammers have been paying to have their spoofed sites show up in search results, increasing the chances that an innocent taxpayer will fall victim to their scheme.
Money is a prime motivator for many consumers, and malicious actors are fully prepared to exploit this. Many people are concerned about how much they might owe or are predicting how much they’ll get back on their tax refund, and scammers play to both of these emotions. So, as hundreds of taxpayers are waiting for a potential tax return, it’s important that they navigate tax season wisely. Check out the following tips to avoid being spoofed by cybercriminals and identity thieves:
File before cybercriminals do it for you. The easiest defense you can take against tax season schemes is to get your hands on your W-2 and file as soon as possible. The more prompt you are to file, the less likely your data will be raked in by a cybercriminal.
Keep an eye on your credit and your identity. Keeping tabs on your credit report and knowing if your personal information has been compromised in some way can help prevent tax fraud. Together, they can let you know if someone has stolen your identity or if you have personal info on the dark web that could lead to identity theft.
Watch out for spoofed websites. Scammers have extremely sophisticated tools that help disguise phony web addresses for DIY tax software, such as stolen company logos and site designs. To avoid falling for this, go directly to the source. Type the address of a website directly into the address bar of your browser instead of following a link from an email or internet search. If you receive any suspicious links in your email, investigating the domain is usually a good way to tell if the source is legitimate or not.
Protect yourself from scam messages. Scammers also send links to scam sites via texts, social media messages, and email. Text Scam Detector can help you spot if the message you got is a fake. It uses AI technology that automatically detects links to scam URLs. If you accidentally click, don’t worry, it can block risky sites if you do.
Clean up your personal info online. Crooks and scammers have to find you before they can contact you. After all, they need to get your phone number or email from somewhere. Sometimes, that’s from “people finder” and online data brokers that gather and sell personal info to any buyer. Including crooks. McAfee Personal Data Cleanup can remove your personal info from the data broker sites scammers use to contact their victims.
Consider an identity theft protection solution. If for some reason your personal data does become compromised, be sure to use an identity theft solution such as McAfee Identity Theft Protection, which allows users to take a proactive approach to protect their identities with personal and financial monitoring and recovery tools to help keep their identities personal and secured.
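The "spoofed websites" tip above can be partly automated. One quick check is whether a link's hostname actually belongs to the domain you expect – scam URLs often put the trusted name in a subdomain of a domain the crook controls. A rough sketch (the example URLs below are made up for illustration):

```python
from urllib.parse import urlparse

def belongs_to(url: str, trusted_domain: str) -> bool:
    """True if the URL's hostname is the trusted domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return host == trusted_domain or host.endswith("." + trusted_domain)

# "irs.gov.tax-refund-now.example.com" merely *starts* with "irs.gov" –
# the registered domain is actually "example.com", so it fails the check.
```

This is only a first filter (it won't catch look-alike domains such as `1rs.gov`), but it defeats the most common subdomain trick.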
The post How to Steer Clear of Tax Season Scams appeared first on McAfee Blog.
There are more online users now than ever before, thanks to the availability of network-capable devices and online services. The internet population in Canada is the highest it has been, topping the charts at 33 million. That number is only expected to increase through the upcoming years. However, this growing number and continued adoption of online services pose increasing cybersecurity risks as cybercriminals take advantage of more online users and exploit vulnerabilities in online infrastructure. This is why we need AI-backed software to provide advanced protection for online users.
The nature of these online threats is ever-changing, making it difficult for legacy threat detection systems to monitor threat behavior and detect new malicious code. Fortunately, threat detection systems such as McAfee+ adapt to incorporate the latest threat intelligence and artificial intelligence (AI) driven behavioral analysis. Here’s how AI impacts cybersecurity to go beyond traditional methods to protect online users.
Most of today’s antivirus and threat detection software leverages behavioral, heuristic-based detection built on machine learning models to spot known malicious behavior. Traditional methods rely on data analytics to detect known threat signatures or footprints with incredible accuracy. However, these conventional methods do not account for new malicious code, otherwise known as zero-day malware, for which no information is yet available. That is why AI is mission-critical to cybersecurity: it enables security software and providers to take a more intelligent approach to virus and malware detection than signature-based analysis and data analytics alone.
Similar to human-like reasoning, machine learning models follow a three-stage process to gather input, process it, and generate an output in the form of threat leads. Threat detection software can gather information from threat intelligence to understand known malware using these models. It then processes this data, stores it, and uses it to draw inferences and make decisions and predictions. Behavioral heuristic-based detection leverages multiple facets of machine learning, one of which is deep learning.
Deep learning employs neural networks to emulate the function of neurons in the human brain. This architecture uses validation algorithms to cross-check data against complex mathematical equations, applying an “if this, then that” approach to reasoning. It looks at what occurred in the past and analyzes current and predictive data to reach a conclusion. The more data the framework’s many layers process, the more accurate the prediction becomes.
Many antivirus and detection systems also use ensemble learning. This process takes a layered approach by applying multiple learning models to create one that is more robust and comprehensive. Ensemble learning can boost detection performance with fewer errors for a more accurate conclusion.
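The idea behind ensemble learning can be shown with a toy majority-vote example. This is a deliberately simplified sketch, not real detection logic: the three "detectors" and the file record's fields (hash, entropy, behavior flags) are invented for illustration.

```python
def ensemble_verdict(sample, detectors):
    """Majority vote across several weak detectors: flag if most say 'malicious'."""
    votes = sum(1 for detect in detectors if detect(sample))
    return votes > len(detectors) / 2

# Three toy detectors, each weak on its own (fields are illustrative).
signature_db = {"deadbeef"}                       # known-bad file hashes
by_signature = lambda f: f["hash"] in signature_db
by_entropy   = lambda f: f["entropy"] > 7.0       # packed/encrypted payloads look random
by_behavior  = lambda f: f["writes_registry"] and f["spawns_shell"]

# Unknown hash, but high entropy and suspicious behavior: two of three vote 'malicious'.
sample = {"hash": "c0ffee", "entropy": 7.4, "writes_registry": True, "spawns_shell": True}
```

Here the signature check alone would miss the sample (its hash is unknown), but the combined vote still flags it – exactly the "layered approach" described above.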
Additionally, today’s detection software leverages supervised learning techniques by taking a “learn by example” approach. This process strives to develop an algorithm by understanding the relationship between a given input and the desired output.
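The "learn by example" idea can likewise be sketched in a few lines with a nearest-centroid classifier – one of the simplest supervised learners. The features below (file entropy, count of suspicious API calls) and the training data are made up for illustration; production systems use far richer features and models.

```python
def train_centroids(examples):
    """'Learn by example': average the feature vectors seen for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(features, centroids):
    """Predict the label whose learned centroid is closest to the input."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(features, centroids[lbl]))

# Toy labeled examples: (entropy, suspicious API calls) -> label
training = [([7.8, 9], "malware"), ([7.5, 7], "malware"),
            ([3.1, 1], "benign"),  ([2.8, 0], "benign")]
model = train_centroids(training)
```

Given a new, never-before-seen file's features, the model predicts the label of the examples it most resembles – the relationship between input and desired output that the paragraph above describes.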
Machine learning is only a piece of an effective antivirus and threat detection framework. A proper framework combines new data types with machine learning and cognitive reasoning to develop a highly advanced analytical framework. This framework will allow for advanced threat detection, prevention, and remediation.
Online threats are increasing at a staggering pace. McAfee Labs observed an average of 588 malware threats per minute. These risks exist and are often exacerbated for several reasons, one of which is the complexity and connectivity of today’s world. Threat detection analysts are unable to detect new malware manually due to its sheer volume. However, AI can identify and categorize new malware based on malicious behavior before it has a chance to affect online users. AI-enabled software can also detect mutated malware that attempts to avoid detection by legacy antivirus systems.
Today, there are more interconnected devices and online usage ingrained into people’s everyday lives. However, the growing number of digital devices creates a broader attack surface. In other words, hackers will have a higher chance of infiltrating a device and those connected to it.
Additionally, mobile usage is putting online users at significant risk. Over 85% of the Canadian population owns a smartphone. Hackers are noticing the rising number of mobile users and are rapidly taking advantage of the fact to target users with mobile-specific malware.
The increased online connectivity through various devices also means that more information is being stored and processed online. Nowadays, more people are placing their data and privacy in the hands of corporations that have a critical responsibility to safeguard their users’ data. The fact of the matter is that not all companies can guarantee the safeguards required to uphold this promise, ultimately resulting in data and privacy breaches.
In response to these risks and the rising sophistication of the online landscape, security companies combine AI, threat intelligence, and data science to analyze and resolve new and complex cyber threats. AI-backed threat protection identifies and learns about new malware using machine learning models. This enables AI-backed antivirus software to protect online users more efficiently and reliably than ever before.
AI addresses numerous challenges posed by increasing malware complexity and volume, making it critical for online security and privacy protection. Here are the top 3 ways AI enhances cybersecurity to better protect online users.
1. Effective threat detection
The most significant difference between traditional signature-based threat detection methods and advanced AI-backed methods is the capability to detect zero-day malware. Functioning exclusively from either of these two methods will not result in an adequate level of protection. However, combining them results in a greater probability of detecting more threats with higher precision. Each method will ultimately play on the other’s strengths for a maximum level of protection.
2. Enhanced vulnerability management
AI enables threat detection software to think like a hacker. It can help software identify vulnerabilities that cybercriminals would typically exploit and flag them to the user. It also enables threat detection software to better pinpoint weaknesses in user devices before a threat has even occurred, unlike conventional methods. AI-backed security advances past traditional methods to better predict what a hacker would consider a vulnerability.
3. Better security recommendations
AI can help users understand the risks they face daily. An advanced threat detection software backed by AI can provide a more prescriptive solution to identifying risks and how to handle them. A better explanation results in a better understanding of the issue. As a result, users are more aware of how to mitigate the incident or vulnerability in the future.
AI and machine learning are only a piece of an effective threat detection framework. A proper threat detection framework combines new data types with the latest machine learning capabilities to develop a highly advanced analytical framework. This framework will allow for better cyber threat detection, prevention, and remediation.
The post The What, Why, and How of AI and Threat Detection appeared first on McAfee Blog.
Deep fakes are a growing concern in the age of digital media and can be extremely dangerous for school children. Deep fakes are digital images, videos, or audio recordings that have been manipulated to look or sound like someone else. They can be used to spread misinformation, create harassment, and even lead to identity theft. With the prevalence of digital media, it’s important to protect school children from deep fakes.
Educating students on deep fakes is an essential step in protecting them from the dangers of these digital manipulations. Schools should provide students with information about the different types of deep fakes and how to spot them.
Media literacy is an important skill that students should have in order to identify deep fakes and other forms of misinformation. Schools should provide students with resources to help them understand how to evaluate the accuracy of a digital image or video.
Schools should emphasize the importance of digital safety and provide students with resources on how to protect their online identities. This includes teaching students about the risks of sharing personal information online, using strong passwords, and being aware of phishing scams.
Schools should monitor online activity to ensure that students are not exposed to deep fakes or other forms of online harassment. Schools should have policies in place to protect students from online bullying and harassment, and they should take appropriate action if they find any suspicious activity.
By following these tips, schools can help protect their students from the dangers of deep fakes. Educating students on deep fakes, encouraging them to be media literate, promoting digital safety, and monitoring online activity are all important steps to ensure that school children are safe online.
By equipping students with the tools they need to navigate the online world, schools can also help them learn how to use digital technology responsibly. Through educational resources and programs, schools can teach students the importance of digital citizenship and how to use digital technology ethically and safely. Finally, schools should promote collaboration and communication between parents, students, and school administration to ensure everyone is aware of the risks of deep fakes and other forms of online deception.
Deep fakes have the potential to lead to identity theft, particularly if deep fakes tools are used to steal the identities of students or even teachers. McAfee’s Identity Monitoring Service, as part of McAfee+, monitors the dark web for your personal info, including email, government IDs, credit card and bank account info, and more. We’ll help keep your personal info safe, with early alerts if your data is found on the dark web, so you can take action to secure your accounts before they’re used for identity theft.
The post How to Protect School Children From Deep Fakes appeared first on McAfee Blog.
With the rise of artificial intelligence (AI) and machine learning, concerns about the privacy of personal data have reached an all-time high. Generative AI is a type of AI that can generate new data from existing data, such as images, videos, and text. This technology can be used for a variety of purposes, from facial recognition to creating “deepfakes” and manipulating public opinion. As a result, it’s important to be aware of the potential risks that generative AI poses to your privacy.
In this blog post, we’ll discuss how to protect your privacy from generative AI.
Generative AI is a type of AI that uses existing data to generate new data. It’s usually used for things like facial recognition, speech recognition, and image and video generation. This technology can be used for both good and bad purposes, so it’s important to understand how it works and the potential risks it poses to your privacy.
Generative AI can be used to create deepfakes, which are fake images or videos that are generated using existing data. This technology can be used for malicious purposes, such as manipulating public opinion, identity theft, and spreading false information.
Generative AI uses existing data to generate new data, so it’s important to be aware of what data you’re sharing online. Be sure to only share data that you’re comfortable with and be sure to use strong passwords and two-factor authentication whenever possible.
There are a number of privacy-focused tools available that can help protect your data from generative AI. These include tools like privacy-focused browsers, VPNs, and encryption tools. It’s important to understand how these tools work and how they can help protect your data.
It’s important to stay up-to-date on the latest developments in generative AI and privacy. Follow trusted news sources and keep an eye out for changes in the law that could affect your privacy.
By following these tips, you can help protect your privacy from generative AI. It’s important to be aware of the potential risks that this technology poses and to take steps to protect yourself and your data.
Of course, the most important step is to be aware and informed. Research any organizations that are using generative AI and make sure you understand how they use your data. Be sure to read the terms and conditions of any contracts you sign and be aware of any third parties that may have access to your data. Additionally, be sure to look out for notifications of changes in privacy policies and take the time to understand any changes that could affect you.
Finally, make sure to regularly check your accounts and reports to make sure that your data is not being used without your consent. You can also take the extra step of making use of the security and privacy features available on your device. Taking the time to understand which settings are available, as well as what data is being collected and used, can help you protect your privacy and keep your data safe.
This blog post was co-written with artificial intelligence (AI) as a tool to supplement, enhance, and make suggestions. While AI may assist in the creative and editing process, the thoughts, ideas, opinions, and the finished product are entirely human and original to their author. We strive to ensure accuracy and relevance, but please be aware that AI-generated content may not always fully represent the intent or expertise of human-authored material.
The post How to Protect Your Privacy From Generative AI appeared first on McAfee Blog.
AI scams are becoming increasingly common. With the rise of artificial intelligence and technology, fraudulent activity is becoming more sophisticated and harder to spot. As a result, it is becoming increasingly important for families to be aware of the dangers posed by AI scams and to take steps to protect themselves.
By taking these steps, you can help protect your family from AI scams. Educating yourself and your family about the potential risks of AI scams, monitoring your family’s online activity, using strong passwords, installing anti-virus software, and checking your credit report regularly can help keep your family safe from AI scams.
No one likes to be taken advantage of or scammed. By being aware of the potential risks of AI scams, you can help protect your family from becoming victims.
In addition, it is important to be aware of emails or texts that appear to be from legitimate sources but are actually attempts to entice you to click on suspicious links or provide personal information. If you receive a suspicious email or text, delete it immediately. If you are unsure, contact the company directly to verify that the message is legitimate. By being aware of potential AI scams, you can keep your family safe from financial loss or identity theft.
You can also take additional steps to protect yourself and your family from AI scams. Consider using two-factor authentication when logging in to websites or apps, and keep all passwords and usernames secure. Be skeptical of unsolicited emails or texts, and never provide confidential information unless you are sure you know who you are dealing with. Finally, always consider the source and research any unfamiliar company or service before you provide any personal information. By taking these steps, you can help to protect yourself and your family from the dangers posed by AI scams.
Monitor your bank accounts and credit reports to ensure that no unauthorized activity is taking place. Set up notifications to alert you of any changes or suspicious activity. Make sure to update your security software to the latest version and be aware of phishing attempts designed to gain access to your personal information. If you receive a suspicious email or text, do not click on any links and delete the message immediately.
Finally, stay informed and know the signs of a scam. Monitor your online accounts and look out for any requests for personal information. If something looks suspicious, trust your instincts and don’t provide any information. Report any suspicious activity to the authorities and spread the word to help others avoid falling victim to AI scams.
This blog post was co-written with artificial intelligence (AI) as a tool to supplement, enhance, and make suggestions. While AI may assist in the creative and editing process, the thoughts, ideas, opinions, and the finished product are entirely human and original to their author. We strive to ensure accuracy and relevance, but please be aware that AI-generated content may not always fully represent the intent or expertise of human-authored material.
The post How to Protect Your Family From AI Scams appeared first on McAfee Blog.
When we come across the term Artificial Intelligence (AI), our mind often ventures into the realm of sci-fi movies like I, Robot, Matrix, and Ex Machina. We’ve always perceived AI as a futuristic concept, something that’s happening in a galaxy far, far away. However, AI is not only here in our present but has also been a part of our lives for several years in the form of various technological devices and applications.
In our day-to-day lives, we use AI in many instances without even realizing it. AI has permeated into our homes, our workplaces, and is at our fingertips through our smartphones. From cell phones with built-in smart assistants to home assistants that carry out voice commands, from social networks that determine what content we see to music apps that curate playlists based on our preferences, AI has its footprints everywhere. Therefore, it’s integral to not only embrace the wows of this impressive technology but also understand and discuss the potential risks associated with it.
→ Dig Deeper: Artificial Imposters—Cybercriminals Turn to AI Voice Cloning for a New Breed of Scam
AI, a term that might sound intimidating to many, is not so when we understand it. It is essentially technology that can be programmed to achieve certain goals without assistance. In simple words, it’s a computer’s ability to predict, process data, evaluate it, and take necessary action. This smart way of performing tasks is being implemented in education, business, manufacturing, retail, transportation, and almost every other industry and cultural sector you can think of.
AI has been doing a lot of good too. For instance, Instagram, the second most popular social network, is now deploying AI technology to detect and combat cyberbullying in both comments and photos. No doubt, AI is having a significant impact on everyday life and is poised to metamorphose the future landscape. However, alongside its benefits, AI has brought forward a set of new challenges and risks. From self-driving cars malfunctioning to potential jobs lost to AI robots, from fake videos and images to privacy breaches, the concerns are real and need timely discussions and preventive measures.
AI has made it easier for people to face-swap within images and videos, leading to “deep fake” videos that appear remarkably realistic and often go viral. A desktop application called FakeApp allows users to seamlessly swap faces and share fake videos and images. While this displays the power of AI technology, it also brings to light the responsibility and critical thinking required when consuming and sharing online content.
→ Dig Deeper: The Future of Technology: AI, Deepfake, & Connected Devices
Yet another concern raised by AI is privacy breaches. The Cambridge Analytica/Facebook scandal of 2018, alleged to have used AI technology unethically to collect Facebook user data, serves as a reminder that our private (and public) information can be exploited for financial or political gain. Thus, it becomes crucial to discuss and take necessary steps like locking down privacy settings on social networks and being mindful of the information shared in the public feed, including reactions and comments on other content.
McAfee Pro Tip: Cybercriminals employ advanced methods to deceive individuals, propagating sensationalized fake news, creating deceptive catfish dating profiles, and orchestrating harmful impersonations. Recognizing sophisticated AI-generated content can pose a challenge, but certain indicators may signal that you’re encountering a dubious image or interacting with a perpetrator operating behind an AI-generated profile. Know the indicators.
With the advent of AI, cybercrime has found a new ally. As per McAfee’s Threats Prediction Report, AI technology might enable hackers to bypass security measures on networks undetected. This can lead to data breaches, malware attacks, ransomware, and other criminal activities. Moreover, AI-generated phishing emails are scamming people into unknowingly handing over sensitive data.
→ Dig Deeper: How to Keep Your Data Safe From the Latest Phishing Scam
Bogus emails are becoming highly personalized and can trick intelligent users into clicking malicious links. Given the sophistication of these AI-related scams, it is vital to constantly remind ourselves and our families to be cautious with every click, even those from known sources. The need to be alert and informed cannot be overstressed, especially in times when AI and cybercrime often seem to be two sides of the same coin.
As homes evolve to be smarter and synced with AI-powered Internet of Things (IoT) products, potential threats have proliferated. These threats are not limited to computers and smartphones but extend to AI-enabled devices such as voice-activated assistants. According to McAfee’s Threat Prediction Report, these IoT devices are particularly susceptible as points of entry for cybercriminals. Other devices at risk, as highlighted by security experts, include routers, and tablets.
This means we need to secure all our connected devices and home internet at its source – the network. Routers provided by your ISP (Internet Service Provider) are often less secure, so consider purchasing your own. As a primary step, ensure that all your devices are updated regularly. More importantly, change the default password on these devices and secure your primary network along with your guest network with strong passwords.
Having an open dialogue about AI and its implications is key to navigating the intricacies of this technology. Parents need to have open discussions with kids about the positives and negatives of AI technology. When discussing fake videos and images, emphasize the importance of critical thinking before sharing any content online. You might even introduce them to the desktop application FakeApp, which allows users to swap faces within images and videos seamlessly, producing deep fake photos and videos that can appear remarkably realistic and often go viral.
Privacy is another critical area for discussion. After the Cambridge Analytica/Facebook scandal of 2018, the conversation about privacy breaches has become more significant. These incidents remind us how our private (and public) information can be misused for financial or political gain. Locking down privacy settings, being mindful of the information shared, and understanding the implications of reactions and comments are all topics worth discussing.
Awareness and knowledge are the best tools against AI-enabled cybercrime. Making families understand that bogus emails can now be highly personalized and can trick even the most tech-savvy users into clicking malicious links is essential. AI can generate phishing emails, scamming people into handing over sensitive data. In this context, constant reminders to be cautious with every click, even those from known sources, are necessary.
→ Dig Deeper: Malicious Websites – The Web is a Dangerous Place
The advent of AI has also likely allowed hackers to bypass security measures on networks undetected, leading to data breaches, malware attacks, and ransomware. Therefore, being alert and informed is more than just a precaution – it is a vital safety measure in the digital age.
Artificial Intelligence has indeed woven itself into our everyday lives, making things more convenient, efficient, and connected. However, with these advancements come potential risks and challenges. From privacy breaches, and fake content, to AI-enabled cybercrime, the concerns are real and need our full attention. By understanding AI better, having open discussions, and taking appropriate security measures, we can leverage this technology’s immense potential without falling prey to its risks. In our AI-driven world, being informed, aware, and proactive is the key to staying safe and secure.
To safeguard and fortify your online identity, we strongly recommend that you delve into the extensive array of protective features offered by McAfee+. This comprehensive cybersecurity solution is designed to provide you with a robust defense against a wide spectrum of digital threats, ranging from malware and phishing attacks to data breaches and identity theft.
The post AI & Your Family: The Wows and Potential Risks appeared first on McAfee Blog.
As we continue to evolve technologically, so do cybercriminals in their never-ending quest to exploit vulnerabilities in our digital lives. The previous years have clearly shown that cybercriminals are increasingly leveraging new technologies and trends to trick their victims. As we move into another year, it’s crucial to be aware of the tried and tested tactics these cyber criminals use and stay prepared against potential threats.
In this article, we delve deeper into one such tactic that remains a favorite among cybercriminals – ‘phishing’ via emails. We focus on the trickiest and most dangerous email subject lines that have been commonly used in worldwide phishing emails. Recognizing these ‘baits’ can be your first step towards safeguarding your identity and valuables against cybercriminals. Beware, there are plenty of these ‘phishes’ in the sea, and it helps to be on your guard at all times.
Sending email messages filled with malicious links or infectious attachments remains a dominant strategy among cybercriminals. This strategy, commonly known as ‘phishing,’ is often disguised in a variety of forms. The term ‘Phishing’ is derived from the word ‘Fishing,’ and just like fishing, where bait is thrown in the hope that a fish will bite, phishing is a cyber trick where an email is the bait, and the unsuspecting user is the fish.
McAfee’s research into today’s most common phishing scams revealed that cybercriminals tend to use certain email subject lines more often. Although this does not mean that emails with other subject lines are not harmful, being aware of the most commonly used ones can give you an edge. The key takeaway here is to be vigilant and alert when it comes to all kinds of suspicious emails, not just those with specific subject lines.
Let’s take a look at the top five most commonly used subject lines in worldwide phishing emails. The list will give you an understanding of the varied strategies employed by cybercriminals. The strategies range from social networking invitations to ‘returned mail’ error messages and phony bank notifications. Be aware that these are just the tip of the iceberg and cyber criminals are continuously coming up with new and improved tactics to gain access to your sensitive data.
In the past, cybercriminals used to cast big, untargeted nets in the hopes of trapping as many victims as possible. However, recent trends indicate a shift towards more targeted and custom messages designed to ensnare more victims. A classic example of such a targeted phishing attack is the JP Morgan Chase phishing scam that took place earlier this year.
→ Dig Deeper: Mobile Bankers Beware: A New Phishing Scam Wants Your Money
The fact that phishing scams are still on the rise amplifies the importance of proactive measures to protect our digital assets. As technology advances, these threats continue to evolve, making ongoing vigilance, education, and caution in our online engagements critical in combating the increasing prevalence of such scams.
Phishing emails, often with a guise of urgency or familiarity, cunningly aim to deceive recipients into revealing sensitive information, most commonly, personal identities and financial credentials. These malicious messages are designed to prey on our trust and curiosity, making it crucial to scrutinize each email carefully. Cybercriminals behind phishing schemes are after the keys to both your digital identity and your wallet. They may seek login credentials, credit card details, social security numbers, and other sensitive data, which can lead to identity theft, financial loss, and even broader security breaches. It is essential to exercise caution and rely on best practices for email and internet security to thwart their efforts and safeguard your online presence.
While phishing emails come in a variety of forms, their ultimate goal remains the same: to steal your identity and money. As we move into the New Year, it’s prudent to add a few safety measures to your resolutions list. Protecting yourself from the increasingly sophisticated and customized phishing attacks requires more than awareness.
With an understanding of phishing techniques, the next step is learning how to protect yourself from falling prey to them. Ultimately, you are the first line of defense. If you’re vigilant, you can prevent cyber criminals from stealing your sensitive information. The following are some tips that can help you safeguard your digital life and assets:
First, avoid opening attachments or clicking on links from unknown senders. This is the primary method that cybercriminals use to install malware on your device. If you don’t recognize the sender of an email, or if something seems suspicious, don’t download the attachment or click on the link. Even if you do know the sender, be cautious if the email message seems odd or unexpected. Cybercriminals often hack into email accounts to send malicious links to the victim’s contacts.
Another important practice is to think twice before sharing personal information. If you’re asked for your name, address, banking information, password, or any other sensitive data on a website you accessed from an email, don’t supply this information, as it is likely a phishing attempt. In case of any doubts regarding the authenticity of a request for your information, contact the company directly using a phone number or web address you know to be correct.
Even with the most diligent practices, it’s still possible to fall victim to phishing attacks. Hence, having security nets in place is crucial. Start by being careful on social networks. Cybercriminals often hack into social media accounts and send out phishing links as the account owner. Even if a message appears to come from a friend, be cautious if it looks suspicious, especially if it contains only a link and no text.
Installing comprehensive security software is another essential step. McAfee LiveSafe service, for instance, offers full protection against malware and viruses on multiple devices. This software can be a lifeline if you happen to click a malicious link or download a hazardous attachment from an email.
It’s also a smart idea to regularly update your devices. Updates often contain patches for security vulnerabilities that have been discovered since the last iteration of the software. Cybercriminals are always looking for vulnerabilities to exploit, so keeping your software up-to-date is one of the most effective ways to protect yourself.
McAfee Pro Tip: Always update both your software and devices. First and foremost, software updates often include patches and fixes for vulnerabilities and weaknesses that cybercriminals can exploit. By staying up-to-date, you ensure that you have the latest defenses against evolving threats. Learn more about the importance of software updates.
Phishing attempts are a constant threat in the digital world, and their sophistication continues to evolve. Cybercriminals are relying more on tailored and targeted attacks to deceive their victims. The top five most dangerous email subject lines mentioned above are a clear indicator that criminals are becoming more nuanced in their attempts to trick victims. However, with awareness and vigilance, you can effectively avoid their traps.
Remember, your personal and financial information is valuable. Make sure to protect yourself from phishing attempts by avoiding suspicious links and attachments, thinking twice before sharing your personal information, being cautious on social media, installing comprehensive security software like McAfee+, and keeping all software up-to-date. Being prepared can make all the difference in keeping your digital life secure.
The post Top 5 Most Dangerous Email Subject Lines appeared first on McAfee Blog.
SMiShing, a term derived from ‘SMS phishing’, is a growing cyber threat that is as dangerous as, if not more dangerous than, its sibling, “Phishing.” While the terms may seem comical, the repercussions of falling victim to these scams are no laughing matter. In an increasingly digital age, cybercriminals are taking advantage of our reliance on technology to steal personal information and leverage it for malicious purposes. This article provides an in-depth explanation of SMiShing, how it works, and, most importantly, how you can protect yourself from it.
In essence, SMiShing is a deceptive practice where scammers send fraudulent text messages masquerading as reputable institutions, aiming to dupe recipients into clicking on a link, calling a number, or providing sensitive personal information. The risk with SMiShing is that mobile users tend to trust their SMS messages more than their emails, making it an effective scamming tool. The best line of defense is awareness and understanding of what SMiShing is, how it operates, and the protective measures you can take against it.
The term ‘SMiShing’ is a concatenation of ‘SMS’ (short message service) and ‘Phishing’. The latter is a cybercriminal strategy, where scammers send emails that impersonate legitimate organizations with the aim of luring victims into clicking links and/or entering their login data or credentials. The word ‘Phishing’ is a play on the word ‘fishing’, depicting the tactic of baiting victims and fishing for their personal information.
SMiShing is a variant of phishing, a social engineering tactic where scammers resort to sending text messages instead of emails. These messages are engineered to appear as though they’ve been sent by legitimate, trusted organizations, leading the recipient to either click on a link or respond with their personal details. The transition from emails to text messages signals a shift in cybercrime trends, as scammers exploit the trust users place in their text messages, as opposed to their scrutiny of emails.
→ Dig Deeper: What Is Smishing and Vishing, and How Do You Protect Yourself?
Cybercriminals use sophisticated technology that allows them to generate cell phone numbers based on area codes. These phone numbers combine a cell carrier’s assigned prefix with four random digits. Once these phone numbers are generated, the scammers utilize mass text messaging services to disseminate their SMiShing bait, much like casting a large fishing net hoping to snare unsuspecting victims. A simple online search for “mass SMS software” will yield numerous free and low-cost programs that facilitate mass texting, revealing the ease with which these scams can be carried out.
→ Dig Deeper: What You Need to Know About the FedEx SMiShing Scam
SMiShing has proven to be effective mainly because most people have been conditioned to trust text messages more than emails. Moreover, unlike emails accessed on a PC, text messages do not allow for easy link previewing, making it risky to click on links embedded within the texts. The links either lead to malicious websites intended to steal data or prompt the download of keyloggers, tools that record every keystroke on your device, facilitating the theft of personal information. Alternatively, some SMiShing texts may trick recipients into calling specific numbers which, when dialed, incur hefty charges on the victim’s phone bill.
The first step towards protecting yourself against SMiShing is recognizing the threat. Cybercriminals often capitalize on the victim’s lack of understanding about how these scams work. They prey on the recipient’s trust in their text messages and their curiosity to view links sent via SMS. By understanding how SMiShing works, you are able to spot potential scams and protect yourself against them.
Typically, SMiShing messages are crafted to impersonate familiar, reputable organizations such as banks, utility companies, or even government institutions. They often induce a sense of urgency, pushing the recipient to act swiftly, leaving little to no time for scrutiny. The messages may alert you of suspicious activity on your account, a pending bill, or offer incredible deals that seem too good to be true. Any SMS message that prompts you to click on a link, call a certain number, or provide personal information should be treated with suspicion.
More often than not, recognizing an SMiShing scam relies on your observational skills and your ability to spot the tell-tale signs. One common red flag is poor grammar and spelling. Although this is not always the case, several SMiShing scams tend to have mistakes that professional communications from reputable institutions would not.
Another sign is that the message is unsolicited. If you didn’t initiate contact or expect a message from the supposed sender, you should treat it with suspicion. Additionally, reputable organizations usually employ a secure method of communication when dealing with sensitive information; they would rarely, if ever, ask for personal data via SMS.
Pay attention to the phone number. A text from a legitimate institution usually comes from a short code number, not a regular ten-digit phone number. Also, check whether the message uses a generic greeting instead of your name. Finally, use your common sense. If an offer seems too good to be true, it probably is. Also, remember that verifying the legitimacy of the text message with the supposed sender can never harm.
Many of these signs can be subtle and easy to overlook. However, staying vigilant and taking the time to scrutinize unusual text messages can save you from falling victim to SMiShing.
→ Dig Deeper: How to Squash the Android/TimpDoor SMiShing Scam
Psychological Manipulation is a critical aspect of this cyber threat, involving the art of exploiting human psychology and trust to trick individuals into revealing sensitive information or engaging in harmful actions. Even individuals with the intelligence to steer clear of scams might become vulnerable if the psychological manipulation is exceptionally compelling.
Smishing attackers employ a range of social engineering techniques that tap into human emotions, including fear, curiosity, and urgency. They often impersonate trusted entities or use personalized information to lower recipients’ guard and establish trust. The use of emotional manipulation and emotional triggers, such as excitement or outrage, further intensifies the impact of these attacks. Recognizing and understanding these psychological tactics is paramount for individuals and organizations in fortifying their defenses against smishing, empowering them to identify and resist such manipulative attempts effectively.
→ Dig Deeper: Social Engineering—The Scammer’s Secret Weapon
Arming yourself with knowledge about SMiShing and its modus operandi is the initial line of defense. Once you comprehend the nature of this scam, you are better equipped to identify it. However, understanding alone is not enough. There are several practical measures that you can adopt to safeguard your personal information from SMiShing scams.
At the top of this list is exercising caution with text messages, especially those from unknown sources. Resist the impulse to click on links embedded within these texts. These links often lead to malicious websites engineered to steal your data or trigger the download of harmful software like keyloggers. Do not respond to text messages that solicit personal information. Even if the message seems to originate from a trusted entity, it is always better to verify through other means before responding.
Furthermore, be wary of text messages that create a sense of urgency or evoke fear. SMiShers often manipulate emotions to spur immediate action, bypassing logical scrutiny. For instance, you may receive a message supposedly from your bank alerting you about a security breach or unauthorized transaction. Instead of panicking and clicking on the provided link, take a moment to contact your bank through their officially listed number for clarification.
There is also the option of using comprehensive mobile security applications. These apps provide an array of features such as text message filtering, antivirus, web protection, and anti-theft measures. Applications like McAfee Mobile Security can significantly enhance your defense against SMiShing attacks and other cyber threats.
McAfee Pro Tip: Try McAfee Mobile Security’s scam protection. It scans the URLs within your text messages to enhance your online safety. If a suspicious or scam link is detected, it will send an alert on Android devices or automatically filter out the problematic text. Additionally, it actively blocks potentially harmful links in emails, text messages, and social media if you happen to click on them by mistake, adding an extra layer of protection to your online experience.
SMiShing is a serious cyber threat that aims to exploit the trust that individuals place in their text messages. By impersonating reputable organizations and creating a sense of urgency, scammers try to trick recipients into providing personal information or clicking on malicious links. Protecting oneself from SMiShing involves understanding what it is, recognizing the threat, and adopting effective protective measures. These include being cautious of unsolicited text messages, refraining from clicking on links within these texts, and using comprehensive mobile security applications. Additionally, being aware of the red flags, such as poor grammar, unsolicited messages, and requests for sensitive information via SMS, can help in detecting potential scams. In an increasingly digital age, staying vigilant and proactive is the best way to protect your personal information from cybercriminals.
The post Understanding and Protecting Yourself from SMiShing appeared first on McAfee Blog.
In the ever-growing digital age, our mobile devices contain an alarming amount of personal, sensitive data. From emails, social media accounts, banking applications to payment apps, our personal and financial lives are increasingly entwined with the convenience of online, mobile platforms. However, despite the increasing threat to cyber security, it appears many of us are complacent about protecting our mobile devices.
A survey revealed that many mobile users still use easy-to-remember and easy-to-guess passwords. With such an increasing dependence on mobile devices to handle our daily tasks, it seems unimaginable that many of us leave our important personal data unguarded. Theft or loss of an unsecured mobile device can, and often does, result in a catastrophic loss of privacy and financial security.
The unfortunate reality of our digital era is that devices are lost, misplaced, or stolen every day. A mobile device without password protection is a gold mine for anyone with malicious intent. According to a global survey by McAfee and One Poll, many consumers are largely unconcerned about the security of their personal data stored on mobile devices. To illustrate, only one in five respondents had backed up data on their tablet or smartphone. Even more concerning, 15% admitted they saved password information on their phone.
Such statistics are troubling for several reasons. The most obvious is the risk of personal information, including banking details and online login credentials, falling into the wrong hands. A lost or stolen device is not just a lost device: it’s potentially an identity, a bank account, or worse. The lack of urgency in securing data on mobile devices speaks to a broad consumer misunderstanding about the severity of the threats posed by cybercriminals and the ease with which they can exploit an unprotected device.
→ Dig Deeper: McAfee 2023 Consumer Mobile Threat Report
Perhaps one of the most surprising findings of the survey is the difference in mobile security behaviors between men and women. This difference illustrates not just a disparity in the type of personal information each group holds dear, but also the degree of risk each is willing to accept with their mobile devices.
Broadly speaking, men tend to place greater value on the content stored on their devices, such as photos, videos, and contact lists. Women, on the other hand, appear more concerned about the potential loss of access to social media accounts and personal communication tools like email. They are statistically more likely to experience online harassment and privacy breaches. This could explain why they are more concerned about the security of their social media accounts, as maintaining control over their online presence can be a way to protect against harassment and maintain a sense of safety.
The loss of a mobile device, which for many individuals has become an extension of their social identity, can disrupt daily life significantly. This distinction illustrates that the consequences of lost or stolen mobile devices are not just financial, but social and emotional as well.
Despite the differences in what we value on our mobile devices, the survey showed a worrying level of risky behavior from both genders. Over half (55%) of respondents admitted sharing their passwords or PIN with others, including their children. This behavior not only leaves devices and data at risk of unauthorized access but also contributes to a wider culture of complacency around mobile security.
Password protection offers a fundamental layer of security for devices, yet many people still choose convenience over safety. Setting a password or PIN isn’t a failsafe method for keeping your data safe. However, it is a simple and effective starting point in the broader effort to protect our digital lives.
→ Dig Deeper: Put a PIN on It: Securing Your Mobile Devices
While the survey results raise an alarm, the good news is that we can turn things around. It all begins with acknowledging the risks of leaving our mobile devices unprotected. There are simple steps that can be taken to ramp up the security of your devices and protect your personal information.
First and foremost, password-protect all your devices. This means going beyond your mobile phone to include tablets and any other portable, internet-capable devices you may use. And, while setting a password, avoid easy ones like “1234” or “1111”. These are the first combinations a hacker will try. The more complex your password is, the sturdier a barrier it forms against unauthorized access.
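To illustrate why combinations like “1234” or “1111” are poor choices, here is a minimal sketch of a weak-PIN check. The short blocklist and the heuristics are illustrative assumptions, not an exhaustive rule set:

```python
# Reject PINs a thief would try first: common combinations, repeated
# digits, and simple ascending or descending runs.
COMMON_PINS = {"1234", "1111", "0000", "1212", "7777", "2580"}  # illustrative

def is_weak_pin(pin: str) -> bool:
    """Flag PINs that fall into easily guessed patterns."""
    if pin in COMMON_PINS:
        return True
    if len(set(pin)) == 1:              # all one digit, e.g. "9999"
        return True
    digits = [int(d) for d in pin]
    steps = {b - a for a, b in zip(digits, digits[1:])}
    if steps in ({1}, {-1}):            # runs such as "3456" or "6543"
        return True
    return False
```

A real lockout policy would also rate-limit attempts; this sketch only screens the choice of PIN itself.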
Another important step is to avoid using the “remember me” function on your apps or mobile web browser. Although it might seem convenient to stay logged into your accounts for quick access, this considerably amplifies the risk if your device gets stolen or lost. It’s crucial to ensure you log out of your accounts whenever not in use. This includes email, social media, banking, payment apps, and any other accounts linked to sensitive information.
McAfee Pro Tip: If your phone is lost or stolen, employing a combination of tracking your device, locking it remotely, and erasing its data can safeguard both your phone and the information it contains. Learn more tips on how to protect your mobile device from loss and theft.
Sharing your PIN or password is also a risky behavior that should be discouraged. Admittedly, this might be challenging to implement, especially with family members or close friends. But the potential harm it can prevent in the long run far outweighs the temporary convenience it might present.
Having highlighted the importance of individual action towards secure mobile practices, it’s worth noting that investing in reliable security software can also make a world of difference. A mobile security product like McAfee Mobile Security, which offers anti-malware, web protection, and app protection, can provide a crucial extra layer of defense.
With app protection, not only are you alerted if your apps are accessing information on your mobile that they shouldn’t, but in the event that someone does unlock your device, your personal information remains safe by locking some or all of your apps. This means that even if your device falls into the wrong hands, they still won’t be able to access your crucial information.
It’s also critical to stay educated on the latest ways to protect your mobile device. Cyber threats evolve constantly, and awareness is your first line of defense. McAfee has designed a comprehensive approach to make the process of learning about mobile security both informative and engaging. Our array of resources includes a rich repository of blogs, insightful reports, and informative guides, meticulously crafted to give users a wealth of knowledge on how to protect their mobile devices.
While the current state of mobile device security may seem concerning, it’s far from hopeless. By incorporating simple security practices such as setting complex passwords and avoiding shared access, we can significantly reduce the risk of unauthorized data access. Additionally, investing in trusted mobile security products like McAfee Mobile Security can provide a robust defense against advancing cyber threats. Remember, our digital lives mirror our real lives – just as we lock and secure our homes, so too must we protect our mobile devices.
The post How to Protect Your Mobile Device From Loss and Theft appeared first on McAfee Blog.
Every day, life for many consumers has become more “digital” than before—this has made day-to-day tasks easier for many of us, but it also creates new challenges. From online banking to medical records, protecting our private, personal information is imperative.
Too often, the same password is used for multiple online accounts—for instance, you might log in to your online banking site with the same password you use for your personal email account. In a McAfee survey, 34% of people reported that they use the same password for multiple online accounts. Using identical passwords is convenient for us as users, but it’s also convenient for any hacker trying to steal personal information—once a hacker has access to one of your accounts, he can use a recycled password to snoop around at will.
Certainly, using more than one password, along with passphrases that mix upper- and lowercase letters, numbers, and symbols and run at least ten characters, goes a long way toward keeping malicious people at bay. Unfortunately, merely adding variety to your login information doesn’t guarantee security. In The Easiest Ways to Not Get Hacked, author Rebecca Greenfield included a chart showing just how much difference one character of length makes.
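The point that chart makes can be reproduced directly. Assuming a password drawn at random from the 94 printable ASCII characters, each added character multiplies a brute-force attacker's search space by 94:

```python
import math

ALPHABET = 94  # printable ASCII characters, assuming random selection

def search_space(length: int) -> int:
    """Number of candidates a brute-force attack must consider."""
    return ALPHABET ** length

def entropy_bits(length: int) -> float:
    """Equivalent strength in bits for a random password of this length."""
    return length * math.log2(ALPHABET)

# Going from 8 to 9 characters multiplies the attacker's work by 94;
# a 10-character random password is roughly 65.5 bits of entropy.
print(search_space(9) / search_space(8))   # 94.0
print(round(entropy_bits(10), 1))          # 65.5
```

Note the assumption: these figures hold only for randomly chosen characters. Dictionary words and predictable substitutions shrink the effective search space dramatically.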
One of the most important accounts to keep secure is your primary email account—and here’s why: sooner or later, we all have to use the “I forgot my password” option, which typically sends a password reset email.
A hacker only needs to crack the password for your primary email account, and he’ll be able to access any of your other secure accounts simply by clicking the “forgot password” button when he sees it. This is known as a single point of failure, meaning it’s the one piece in any system that can bring down your whole system.
McAfee Pro Tip: If you’re having trouble remembering all your complex passwords on multiple accounts, a password manager can help you save time and effort while securing your accounts and devices. Learn more about McAfee’s password manager.
Establishing a separate email account for registration is one idea—in other words, your “I forgot my password” emails would all be sent to an account other than your primary email account. But even in that situation, there’s still only one password between a hacker and most of the data you want to keep from a hacker’s hands—from financial accounts and bank access to your weekly grocery delivery service. So the real question, even if you’re savvy enough to have a separate email address for password rescue, is: how do you make any email account more secure?
Two-step verification (often referred to as two-factor authentication) is a system designed to give you an extra layer of security that’s easy to use and indispensable for commercial or highly sensitive accounts. Two-step verification protects your email not only with a password but also by associating your account with a specific device or devices. A recent example of how this works comes from Google: in its two-step verification for Gmail accounts, a user simply re-authorizes the account every 30 days by providing a numeric code that confirms the account.
→ Dig Deeper: Two-Factor vs. Multi-Factor Authentication: What’s the Difference?
The extra step and learning a new system of security sound like an enormous hassle, but Google has taken the pain out of the process by allowing you to obtain the code in one of three ways:
This means that a hacker who wants to access your email account can only do so if he has access to your text messages or your landline phone. It might not stop every cybercriminal, but it does make the average hacker’s job a lot harder.
McAfee Pro Tip: Some hackers may go as far as calling your personal numbers, if they have access to them, and ask for your two-factor verification code to access your financial accounts, citing that they need it for their ongoing promotions or measures to improve your account security. This is a social engineering tactic that you should familiarize yourself with. Learn more about social engineering.
This two-factor authentication, while not new, is making major inroads among websites, apps, and services that process critical information. Many corporations have used hardware-based secondary authentication codes for years, but Google and others (including Twitter) are working hard to make this enhanced authentication flow a more practical and accessible part of our working lives.
New biometric verification options, such as a retina or fingerprint scan, are also catching on among security-conscious consumers, and will likely be a feature on more devices in the future. As times change, and more sensitive information flows through these sites, we can be sure to see more of these processes put into place.
→ Dig Deeper: How Virtual Reality and Facebook Photos Helped Researchers Hack Biometric Security
Two-step verification offers multiple benefits in the world of digital security. The key merit is that it presents an extra hurdle for hackers to overcome. If a hacker has breached your password, they still have to pass the second level of verification. As such, two-step verification makes your information harder to access, giving you added peace of mind.
Apart from enhancing security, two-step verification simplifies the recovery process if you ever forget your password. Since you have set up a secondary recovery method, you can use it to reset your password. This reduces the risk of losing access to your account due to forgotten passwords.
→ Dig Deeper: Let’s Make Security Easy
Setting up two-step verification on your accounts is a relatively straightforward process. The first step is to go to the account settings of the platform where you want to enable this feature. Once there, locate the two-step verification or two-factor authentication option, click on it, and follow the prompts. Typically, the system will ask for your phone number or an alternative email address to which the verification code will be sent. Once that is done, you are all set.
From then on, every time you log in, you will need to input not only your password but also a unique code sent to your phone number or alternative email. Remember to choose a method that is convenient for you. For instance, if you are always on your phone, it may be easier to opt for the text message verification code option. This ensures that you can always promptly complete the second step of verification whenever you log in.
→ Dig Deeper: Protect Your Social Passwords with Two-Step Verification
While two-step verification offers an added layer of security, it is not foolproof. One potential challenge is that a hacker could intercept the verification code. Despite its rarity, this type of security breach is possible and has occurred. Furthermore, you might face issues if you lose the device used for verification. For example, if you lose your phone and have set it up for receiving verification codes, you might struggle to access your accounts.
Moreover, two-step verification can be inconvenient for some people. It adds an extra step every time you log in, and if you do not have immediate access to your verification device, you might be locked out of your accounts. Despite these challenges, the benefits of two-step verification far outweigh the potential drawbacks, and it remains a robust and recommended security measure in the digital era.
In conclusion, two-step verification offers a critical layer of security in protecting your digital assets. As life becomes increasingly digitized and we continue to store more personal and sensitive information online, it is crucial to employ strong security measures like two-step verification. While it might seem like a hassle at times, the added security, peace of mind, and protection of your personal information it provides make it a worthwhile endeavor. As the old saying goes, “It’s better to be safe than sorry.”
Therefore, embrace two-step verification and make it harder for hackers to gain access to your information. After all, security in the digital sphere is not a luxury, but a necessity.
To further protect your digital assets, consider McAfee+, our most comprehensive online protection software. Protect your mobile, laptops, computers, and IoT devices with reputable security software.
The post Make a Hacker’s Job Harder with Two-step Verification appeared first on McAfee Blog.
In the last decade, Bitcoin has emerged as a revolutionary form of digital asset, disrupting traditional financial markets along the way. Unlike traditional currencies issued by national governments (fiat money), Bitcoin is a decentralized form of money operated via a peer-to-peer network. This means it is not regulated or controlled by any central authority or government. This, along with many other characteristics, offers a range of benefits but also poses certain risks. In this article, we will examine these advantages and challenges to help you evaluate whether the benefits of Bitcoin outweigh the risks.
Bitcoin was created in 2009 by an anonymous person or group of people using the pseudonym Satoshi Nakamoto. As the first cryptocurrency, Bitcoin introduced a new kind of money that is issued and managed without the need for a central authority. Not only is Bitcoin a single unit of currency (simply referred to as a “bitcoin”), but it is also the decentralized, peer-to-peer network that enables the movement of that currency.
Bitcoin transactions are verified by network nodes through cryptography and recorded on a public ledger called the blockchain. Users can access their bitcoins from anywhere in the world, as long as they hold the private key to their unique Bitcoin address. Now, let’s delve into the inherent benefits and risks associated with Bitcoin.
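The "public ledger" idea can be shown with a toy sketch: each block carries the hash of its predecessor, so altering any recorded transaction invalidates every block after it. This illustrates only the chaining concept, not Bitcoin's actual block format or its proof-of-work:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions: list[str], prev_hash: str) -> dict:
    return {"transactions": transactions, "prev_hash": prev_hash}

def chain_is_valid(chain: list[dict]) -> bool:
    """Each block must reference the hash of the block before it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = make_block(["coinbase -> alice"], prev_hash="0" * 64)
block1 = make_block(["alice -> bob: 1 BTC"], prev_hash=block_hash(genesis))
chain = [genesis, block1]
assert chain_is_valid(chain)

genesis["transactions"][0] = "coinbase -> mallory"   # tamper with history
assert not chain_is_valid(chain)                     # the chain no longer verifies
```

Because every node can recompute these hashes independently, tampering is detectable by anyone, which is what gives the ledger its transparency.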
This digital cryptocurrency has gained immense popularity and continues to capture the imagination of investors, tech enthusiasts, and financial experts alike. As we dive into the world of Bitcoin, let’s also uncover the myriad benefits it brings to the table, from decentralization and security to financial inclusion and innovation.
As a decentralized form of currency, Bitcoin is not subject to control by any government, bank, or financial institution. This ensures that the value of Bitcoin is not affected by monetary policies or economic conditions of any specific country. It also means there is no need for intermediaries, such as banks, to process transactions. As a result, Bitcoin transactions can be faster and cheaper than traditional money transfers, particularly for international transactions.
Furthermore, this decentralization offers potential benefits in regions where the local currency is unstable or access to banking is limited. For those without bank accounts, Bitcoin provides an alternative way to store and transact money. It also provides a safeguard against the risks of government-controlled fiat currency, such as inflation or deflation. This property of Bitcoin has been particularly attractive in countries experiencing hyperinflation, such as Venezuela.
Bitcoin transactions are recorded on a public ledger, the blockchain, which is accessible to anyone. This ensures a high level of transparency, as the flow of Bitcoins and the transactions can be tracked by anyone. Nonetheless, while transactions are public, the identities of the parties involved are pseudonymous. This offers a level of privacy and anonymity to users, as their real-world identities are not directly connected to their Bitcoin addresses, offering more privacy than traditional banking systems.
Moreover, because of its immutable and transparent nature, Bitcoin has potential uses beyond being a currency. The underlying blockchain technology has numerous potential applications, including secure sharing of medical records, supply chain management, and secure transfer of assets like land deeds and other legal documents.
→ Dig Deeper: Demystifying Blockchain: Sifting Through Benefits, Examples and Choices
Bitcoin stands as both an enigma and a harbinger of change. Its meteoric rise to prominence has captivated the world, yet it has also garnered its fair share of scrutiny and caution. Now, let’s examine the flip side of the digital coin – the risks that come with it.
Price Volatility
One of the most well-known risks of Bitcoin is its price volatility. The value of a bitcoin can increase or decrease dramatically over a very short period. This volatility can result in significant financial loss. While some traders may enjoy this volatility because it provides exciting opportunities for high-return investments, it can be a risky venture for those seeking stability, particularly for those who intend to use Bitcoin as a regular currency.
The volatility also makes Bitcoin less feasible as a store of value. With traditional currencies, individuals can expect the purchasing power of their money to remain relatively stable over short periods of time. With Bitcoin, however, the purchasing power can fluctuate wildly from day to day.
While the Bitcoin network itself has remained secure since its inception, the ecosystem around it is not entirely secure. Bitcoin wallets and exchanges, which are necessary for users to store and trade Bitcoins, have been the targets of hacking in the past. In some instances, users have lost their entire Bitcoin holdings.
Bitcoin transactions are irreversible. Once a transaction is initiated, it cannot be reversed. If the transaction is fraudulent or a mistake has been made, it cannot be corrected. This risk factor demands a high level of care and caution from Bitcoin users. The anonymity of Bitcoin can also facilitate criminal activities such as money laundering and the buying and selling of illegal goods, which can impact users indirectly.
→ Dig Deeper: Crypto Scammers Exploit: Elon Musk Speaks on Cryptocurrency
Bitcoin operates in a relatively gray area of law and regulation. While it is not illegal, its status varies widely around the world. Some countries have embraced Bitcoin as a legitimate payment method, while others have banned or restricted it. The variability of regulation creates uncertainty and poses a risk for Bitcoin users. There’s also a risk that future regulation could adversely affect Bitcoin. For instance, if a major government declared Bitcoin use illegal, or one of the world’s largest exchanges was hacked, the value of Bitcoin could plummet.
Due to Bitcoin’s decentralized nature, lawmakers and regulatory bodies may find it difficult to draft and implement effective regulations that do not stifle innovation. The digital nature of Bitcoin also poses challenges with legal protections that are generally applied to traditional instruments, such as the ability to challenge fraudulent transactions.
→ Dig Deeper: Cryptohacking: Is Cryptocurrency Losing Its Credibility?
When comparing the benefits and risks of Bitcoin, it becomes clear that this cryptocurrency presents both unique opportunities and challenges. On the positive side, its decentralized and peer-to-peer nature offers a level of independence and flexibility not found in traditional financial systems. Additionally, its underlying blockchain technology offers potential for numerous applications beyond cryptocurrency itself.
However, these benefits must be weighed against the risks they pose, including its high price volatility and security issues, and the potential consequences of an uncertain regulatory environment. These risks underline the need for caution and due diligence before investing in or transacting with Bitcoin.
As the first cryptocurrency, Bitcoin is still in its early stages and will likely continue to evolve. As its regulatory environment becomes clearer and its technology becomes more established, the risks associated with Bitcoin may decrease. However, until then, a balanced perspective on the benefits and risks of Bitcoin is essential for anyone considering participating in its network.
McAfee Pro Tip: Bitcoin’s security issues are one of the main risks you need to consider and watch out for if you wish to invest in Bitcoin. Traditional or cryptocurrency, learn how to protect your finances online.
In a remarkably short time, Bitcoin has evolved from a fringe concept to a global financial phenomenon, challenging conventional notions of currency and decentralization. While its disruptive potential, innovation, and the allure of financial autonomy are undeniable, Bitcoin’s journey is punctuated with volatility, regulatory ambiguities, and security concerns that demand cautious consideration. As it continues to capture the world’s imagination, Bitcoin stands as both a symbol of the digital age’s possibilities and a stark reminder of the complexities and challenges associated with redefining the future of finance. Its ultimate role in the global economy remains uncertain, but its impact on the way we perceive and utilize money is undeniable, solidifying its place in history as a transformative force in the world of finance.
As individuals, it is essential to safeguard your digital assets, traditional financial resources, and online financial dealings to ensure a secure and unrestricted existence in the modern world. That’s why we encourage you to improve your digital security. Check out our McAfee+ and Total Protection to boost your protection.
The post Do the Benefits of Bitcoin Outweigh the Risks? appeared first on McAfee Blog.
The rise in popularity of Internet-connected smart devices has brought about a new era of convenience and functionality for consumers. From Smart TVs and refrigerators to wireless speakers, these devices have transformed the way we live and communicate. However, this advancement in technology is not without its downsides. One of the most notable is the increasing vulnerability to cyber-attacks. In this article, we’ll explore what happened when hundreds of thousands of these devices were roped into an extensive Internet-of-Things (IoT) cyber attack, how it happened, and how you can protect your smart devices to stay safe.
In what has been termed as the first widespread IoT cyber attack, security researchers discovered that over 100,000 smart home devices were manipulated to form a malicious network. This network, dubbed ‘ThingBot,’ was used to launch a massive phishing campaign, sending out approximately 750,000 spam emails over a two-week period.
The key players in this attack were the smart home appliances that many of us use every day. They range from Smart TVs and refrigerators to wireless speakers, all of which were connected to the internet. The attack signified two key developments: the rise of the IoT phenomenon and the substantial security threats posed by these increasingly connected devices.
→ Dig Deeper: LG Smart TVs Leak Data Without Permission
IoT refers to the growing trend of everyday devices becoming more connected to the web. This connection aims to bring added convenience and ease to our daily activities. It ranges from wearable devices like FitBit and Google Glass to smart TVs, thermostats, and computerized cars. While this trend is new and rapidly growing, its implications for security are significant.
The discovery of the IoT botnet in this attack demonstrates just how easily hackers can commandeer these connected smart devices. One would think that security software installed on PCs would provide adequate protection. Unfortunately, that’s not the case. The new generation of connected appliances and wearables does not come with robust security measures. This deficiency is the reason why hackers were able to infect more than 100,000 home devices in a global attack, manipulating these devices to send out their malicious messages.
→ Dig Deeper: The Wearable Future Is Hackable. Here’s What You Need To Know
Cybercriminals will continue to exploit the inherent insecurities in the IoT landscape, especially as the number of connected or “smart” devices is projected to increase exponentially in the coming years (reaching an estimated 200 billion IoT devices by 2020). Here’s a list of implications users can expect:
Prevention and precaution are the best defense against IoT cyber attacks. The first step is to secure your devices with a password. While it may seem simple and obvious, many consumers disregard this step, leaving their devices vulnerable to attacks. Using unique, complex passwords and frequently updating them can help to safeguard against hacking attempts. Furthermore, consider employing two-step verification for devices that offer this feature for additional security.
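A quick way to act on the "unique, complex passwords" advice above is to generate passwords rather than invent them. A brief sketch using Python's standard `secrets` module; the length and character set are illustrative choices:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a fresh random password on every call
```

Unlike the `random` module, `secrets` draws from the operating system's cryptographically strong randomness source, which is what you want for credentials. Pair generated passwords with a password manager so each device or account gets its own.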
One must not forget the importance of software updates. Internet-connected devices such as smart TVs and gaming consoles often come with software that needs regular updating. Manufacturers typically release these updates to patch known security vulnerabilities. Hence, whenever there’s an update, it’s wise to install it promptly. It’s also crucial to exercise caution while browsing the internet on these devices. Avoid clicking links from unknown senders and do not fall for deals that appear too good to be true, as these are common phishing tactics.
→ Dig Deeper: Why Software Updates Are So Important
Before purchasing any IoT device, perform thorough research on the product and the manufacturer. Investigate the company’s security policies and understand the ease with which the product can be updated. In case of any doubts about the security of the device, don’t hesitate to reach out to the manufacturer for clarification. Remember, your security is paramount and deserves this level of attention.
Lastly, it’s vital to protect your mobile devices. Most IoT devices are controlled via smartphones and tablets, making them potential targets for hackers. Ensuring that these devices are secured helps to protect your IoT devices from being compromised. Services like McAfee LiveSafe™ offer comprehensive mobile security that provides real-time protection against mobile viruses, spam, and more, which significantly reduces the chances of a security breach.
McAfee Pro Tip: McAfee LiveSafe doesn’t just protect against mobile viruses. You can safeguard an unlimited number of your personal devices throughout the entire duration of your subscription. So, be sure to connect all your devices for optimal security.
As technology advances and the Internet-of-Things continues to expand, the security challenges associated with it will persist. The first global IoT cyber attack served as a wake-up call for both consumers and manufacturers about the potential security threats that come with the convenience of smart devices. It is essential for individual users to take proactive steps to secure their devices and for manufacturers to continually improve the security features of their products. By working together, we can enjoy the benefits of IoT without compromising our security. And by investing in reliable cybersecurity solutions like McAfee+, Total Protection, and LiveSafe, you can enhance your defense against potential attacks and enjoy the benefits of IoT with greater peace of mind.
The post Smart TVs and Refrigerators Used in Internet-of-Things Cyberattack appeared first on McAfee Blog.
In the age of digital data and Internet access, the potential for scams is more significant than ever. These scams often involve leveraging popular search queries to trap unsuspecting netizens into their malicious schemes. Among the top searches in the online world, celebrities hold a prime spot. Through this guide, we aim to shed light on how scammers take advantage of the global fascination with celebrities to target their potential victims.
As digital users, most of us are likely well-acquainted with the phrase “Just Google it.” The search engine has become a go-to source for any information ranging from essential daily needs to entertainment gossip. But it’s crucial to remember that while you’re in pursuit of data, scammers are in search of their next victim.
Scammers have significantly evolved with the advancement of technology. They’ve mastered the art of creating fake or infected websites that can harm your computer systems, extract your financial information, or even steal your identity. Their strategies often include luring victims through popular searches, such as the latest Twitter trends, breaking news stories, major world events, downloads, or even celebrity images and gossip. The higher the popularity of the search, the greater the risk of encountering harmful results.
McAfee has conducted research for six consecutive years on popular celebrities to reveal which ones are riskiest to search for online. For instance, Emma Watson replaced Heidi Klum as the most dangerous celebrity to look up online, and it was the first year that the top 10 list was composed solely of women. Cybercriminals commonly exploit the names of such popular celebrities to lead users to websites loaded with malicious software, turning an innocent search for videos or pictures into a malware-infected nightmare.
→ Dig Deeper: Emma Watson Video Scam: Hackers Use Celeb’s Popularity to Unleash Viruses
Scammers are well aware of the allure the word “free” holds for most Internet users. They cleverly exploit this to get your attention and draw you into their traps. For instance, when you search for “Beyonce” or “Taylor Swift” followed by prompts like “free downloads”, “Beyonce concert photos”, or “Taylor Swift leaked songs”, you expose yourself to potential online threats aiming to steal your personal information. It’s always prudent to maintain a healthy level of skepticism when encountering offers that seem too good to be true, especially those labeled as “free.”
While the internet can be a dangerous playground, it doesn’t mean that you cannot protect yourself effectively. Using common sense, double-checking URLs, utilizing safe search plugins, and having comprehensive security software are some strategies to help ensure your online safety. This guide aims to provide you with insights and tools to navigate the online world without falling prey to its many hidden dangers.
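The "double-checking URLs" habit can be partly automated. The sketch below flags a few warning signs in a link before you click it; the thresholds and the suspicious-TLD list are assumptions chosen for demonstration, and no heuristic like this replaces comprehensive security software:

```python
from urllib.parse import urlparse

SUSPICIOUS_TLDS = {"zip", "mov", "top", "xyz"}  # illustrative, not exhaustive

def url_red_flags(url: str) -> list[str]:
    """Return a list of simple warning signs found in a URL."""
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        flags.append("not HTTPS")
    if host.count("-") >= 2:                     # bait domains often chain keywords
        flags.append("many hyphens in host")
    if host.split(".")[-1] in SUSPICIOUS_TLDS:
        flags.append("uncommon TLD")
    return flags

print(url_red_flags("http://free-taylor-swift-downloads.xyz/pics"))
```

A link promising "free" celebrity content that trips several of these checks is exactly the kind of result worth skipping in favor of a known, reputable site.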
Truth be told, the responsibility for online safety lies primarily with the user. Just as you would not walk into a shady-looking place in real life, it takes a similar instinct to avoid shady sites while browsing online. One important piece of advice: if something appears too good to be true, in all probability, it is. So, take note of these practical tips to help you guard against celebrity scams and other online threats:
→ Dig Deeper: How to Tell Whether a Website Is Safe or Unsafe
→ Dig Deeper: The Big Reason Why You Should Update Your Browser (and How to Do It)
Having comprehensive security software installed on your devices is another crucial step towards preventing scams. Good antivirus software can protect against the latest threats, alert you about unsafe websites, and even detect phishing attempts. Furthermore, always keep your security software and all other software updated. Cybercriminals are known to exploit vulnerabilities in outdated software to infiltrate your devices and steal your data.
Apart from ensuring you have security software, be cautious about what you download on your devices. Trojans, viruses, and malware are often hidden in downloadable files, especially in sites that offer ‘free’ content. Cybercriminals tempting users to download infected files often use popular celebrity names. Therefore, download wisely and from reputed sources.
McAfee Pro Tip: Before committing to a comprehensive security plan, it’s crucial to evaluate your security protection and analyze your requirements. This proactive stance forms the bedrock for crafting strong cybersecurity measures that cater precisely to your unique needs and potential vulnerabilities. For more information about our acclaimed security solutions, explore our range of products.
In the digital world, where information and entertainment are available at our fingertips, it’s crucial to remain vigilant against scams, especially those involving celebrities. By exercising prudent online practices like scrutinizing URLs, using safe search plugins, and installing comprehensive security software, we can significantly reduce our risk of falling prey to these scams.
It’s imperative to understand that the popularity of a search term or trend is directly proportional to the risk it carries. So the next time you search for your favorite celebrity, remember: the more famous the celebrity, the greater the risk. Together with McAfee, let’s promote safer browsing practices and contribute to a safer online community for all.
The post Celebrities Are Lures For Scammers appeared first on McAfee Blog.
One of the essential aspects of digital security resides in the strength of our passwords. While they are the most convenient and effective way to restrict access to our personal and financial information, a fully secure password is an illusion; in reality, we can only speak of more or less secure passwords. From a practical perspective, we must understand the behind-the-scenes actions that could compromise our passwords and, consequently, our digital lives.
Unfortunately, most users frequently overlook this crucial part of their digital existence. They remain largely ignorant of numerous common techniques that hackers employ to crack passwords, leading to the potential loss of personal details, financial information, or even identity theft. Therefore, this blog aims to enlighten readers on how they might be unknowingly making their passwords vulnerable.
Passwords serve as the first line of defense against unauthorized access to our online accounts, be it email, social media, banking, or other sensitive platforms. However, the unfortunate reality is that not all passwords are created equal, and many individuals and organizations fall victim to password breaches due to weak or compromised credentials. Let’s explore the common techniques for cracking passwords, and learn how to stay one step ahead in the ongoing battle for online security.
In the world of cyber-attacks, dictionary attacks are common. This approach relies on using software that plugs common words into the password fields in an attempt to break in. It’s an unfortunate fact that free online tools exist to make this task almost effortless for cybercriminals. This method spells doom for passwords that are based on dictionary words, common misspellings, slang terms, or even words spelled backward. Likewise, using consecutive keyboard combinations such as qwerty or asdfg is equally risky. An excellent practice to deflect this attack is to use unique character combinations that make dictionary attacks futile.
Besides text-based passwords, these attacks also target numeric passcodes. When over 32 million passwords were exposed in a breach, nearly 1% of the victims used ‘123456’ as their password. Close on its heels, ‘12345’ was the next most popular choice, followed by similar simple combinations. The best prevention against such attacks is avoiding predictable and simple passwords.
→ Dig Deeper: Cracking Passwords is as Easy as “123”
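The mechanics of a dictionary attack are simple enough to sketch in a few lines. This is a minimal illustration only (the wordlist and passwords here are made up for the example; real attackers work from leaked hash dumps and wordlists with millions of entries):

```python
import hashlib

# Hypothetical wordlist -- real ones contain millions of common passwords.
COMMON_PASSWORDS = ["123456", "12345", "password", "qwerty", "asdfg", "letmein"]

def dictionary_attack(target_hash, wordlist):
    """Hash each candidate word and compare it to the stolen hash."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word  # password cracked
    return None  # every candidate failed

# A predictable password falls instantly...
stolen = hashlib.sha256(b"123456").hexdigest()
print(dictionary_attack(stolen, COMMON_PASSWORDS))  # -> 123456

# ...while a unique character combination survives this wordlist.
stolen = hashlib.sha256(b"kT9#vQ2!mZ8p").hexdigest()
print(dictionary_attack(stolen, COMMON_PASSWORDS))  # -> None
```

The attack succeeds only when your password appears in the attacker’s list, which is exactly why unique character combinations render it futile.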
While security questions help in password recovery, they also present a potential vulnerability. When you forget your password and click on the ‘Forgot Password’ link, the website generally poses a series of questions to verify your identity. The issue here is that many people use easily traceable personal information such as names of partners, children, other family members, or pets as their answers, some of which can be found on social media profiles with little effort. To sidestep this vulnerability, it’s best not to use easily accessible personal information as the answer to security questions.
McAfee Pro Tip: Exercise caution when sharing content on social media platforms. Avoid making all your personal information publicly accessible to thwart hackers from gathering sensitive details about you. Learn more about the dangers of oversharing on social media here.
A common mistake that many internet users make is reusing the same password for multiple accounts. This practice is dangerous because if one data breach compromises your password, hackers can potentially gain access to every other account that uses the same login credentials. According to a 2022 report published by LastPass, a recent breach revealed a shocking password reuse rate of 31% among its victims. Hence, using unique passwords for each of your accounts significantly reduces the risk associated with password reuse.
Moreover, it’s also advisable to keep changing your passwords regularly. While this might seem like a hassle, it is a small price to pay for ensuring your digital security. Using a password manager can help you remember and manage different passwords for different websites.
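One quick way to see the reuse problem for yourself: export your saved logins (most password managers support this) and count duplicates. A tiny sketch, using a hypothetical vault dictionary:

```python
from collections import Counter

# Hypothetical vault export: account name -> password
vault = {
    "email": "Tr0ub4dor&3",
    "bank": "Tr0ub4dor&3",   # reused -- one breach exposes both accounts
    "forum": "x9!Lq2#Vv",
}

def find_reused(vault):
    """Return {password: [accounts]} for every password used more than once."""
    counts = Counter(vault.values())
    return {pw: [site for site, p in vault.items() if p == pw]
            for pw, n in counts.items() if n > 1}

print(find_reused(vault))  # -> {'Tr0ub4dor&3': ['email', 'bank']}
```

Every entry in that output is an account that falls for free the moment any one of its siblings is breached.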
Social Engineering is a non-technical strategy that cybercriminals use, which relies heavily on human interaction and psychological manipulation to trick people into breaking standard security procedures. They lure their unsuspecting victims into revealing confidential data, especially passwords. Therefore, vigilance and skepticism are invaluable weapons to have in your arsenal to ward off such attacks.
The first step here would be not to divulge your password to anyone, no matter how trustworthy they seem. You should also be wary of unsolicited calls or emails asking for your sensitive information. Remember, legitimate companies will never ask for your password through an email or a phone call.
Despite the vulnerabilities attached to passwords, much can be done to enhance their security. For starters, creating a strong password is the first line of defense. To achieve this, you need to use a combination of uppercase and lowercase letters, numbers, and symbols. Making the password long, at least 12 to 15 characters, significantly improves its strength. It’s also advisable to avoid using common phrases or strings of common words as passwords – they can be cracked through advanced versions of dictionary attacks.
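As a sketch of those guidelines, Python’s `secrets` module (designed for security-sensitive use, unlike `random`) can generate a 15-character password that mixes all four character classes:

```python
import secrets
import string

def generate_password(length=15):
    """Build a random password containing upper, lower, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until all four character classes are present.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())  # e.g. 'r7#Kq!xZ2@mN9pL'
```

A password built this way contains no dictionary words or keyboard patterns, so the attacks described above have nothing to latch onto.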
In addition to creating a strong password, adopting multi-factor authentication can greatly enhance your account security. This technology requires more than one form of evidence to verify your identity. It combines something you know (your password), something you have (like a device), and something you are (like your fingerprint). This makes it more difficult for an attacker to gain access even if they have your password.
→ Dig Deeper: 15 Tips To Better Password Security
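The “something you have” factor is often a six-digit code from an authenticator app. Under the hood, those codes are typically time-based one-time passwords (TOTP, RFC 6238): a shared secret is hashed together with the current 30-second time window, so the code changes every half minute. A minimal sketch using only Python’s standard library, checked against the RFC’s published test vector:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at timestamp 59 the expected code is 287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # -> 287082
```

Your phone and the server both run this same computation, so a matching code proves you hold the shared secret without the secret ever crossing the network.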
The future of passwords looks promising. Scientists and tech giants are working relentlessly to develop stronger and more efficient access control tools. Biometrics, dynamic-based biometrics, image-based access, and hardware security tokens are some of the emerging technologies promising to future-proof digital security. With biometrics, users will no longer need to remember complex passwords as access will be based on unique personal features such as fingerprints or facial recognition.
Another promising direction is the use of hardware security tokens, which contain digital certificates to authenticate the user. These tokens can be used in combination with a password to provide two-factor authentication. This makes it more difficult for an attacker to gain access as they would need both your token and your password. While these technologies are still developing, they suggest a future where access control is more secure and user-friendly.
In conclusion, while there’s no such thing as a perfectly secure password, much can be done to enhance their security. Understanding the common techniques for cracking passwords, such as dictionary attacks and the exploitation of security questions, is the first step towards creating more secure passwords. Using unique complex passwords, combined with multi-factor authentication and software tools like McAfee’s True Key, can greatly improve the security of your accounts.
The future of passwords looks promising with the development of biometrics and hardware security tokens. Until then, it’s crucial to adopt the best password practices available to protect your digital life. Remember, your online security is highly dependent on the strength and uniqueness of your passwords, so keep them complex, unique, and secure.
The post What Makes My Passwords Vulnerable? appeared first on McAfee Blog.
It’s that time of year again: election season! You already know what to expect when you flip on the TV. Get ready for a barrage of commercials, each candidate saying enough to get you to like them but nothing specific enough to hold them to should they win.
What you might not expect is for sensationalist election “news” to barge in uninvited on your screens. Fake news – or exaggerated or completely falsified articles claiming to be unbiased and factual journalism, often spread via social media – can pop up anytime and anywhere. This election season’s fake news machine will be different than previous years because of the emergence of mainstream artificial intelligence tools.
Here are a few ways desperate zealots may use various AI tools to stir unease and spread misinformation around the upcoming election.
We’ve had time to learn and operate by the adage of “Don’t believe everything you read on the internet.” But now, thanks to deepfake, that lesson must extend to “Don’t believe everything you SEE on the internet.” Deepfake is the digital manipulation of a video or photo. The result often depicts a scene that never happened. At a quick glance, deepfakes can look very real! Some still look real after studying them for a few minutes.
People may use deepfake to paint a candidate in a bad light or to spread sensationalized false news reports. For example, a deepfake could make it look like a candidate flashed a rude hand gesture or show a candidate partying with controversial public figures.
According to McAfee’s Beware the Artificial Imposter report, it only takes three seconds of authentic audio and minimal effort to create a mimicked voice with 85% accuracy. When someone puts their mind to it and takes the time to hone the voice clone, they can achieve a 95% voice match to the real deal.
Well-known politicians have thousands of seconds’ worth of audio clips available to anyone on the internet, giving voice cloners plenty of samples to choose from. Fake news spreaders could employ AI voice generators to add an authentic-sounding talk track to a deepfake video or to fabricate a snappy and sleazy “hot mic” clip to share far and wide online.
Programs like ChatGPT and Bard can make anyone sound intelligent and eloquent. In the hands of rabble-rousers, AI text generation tools can create articles that sound almost professional enough to be real. Plus, AI allows people to churn out content quickly, meaning that people could spread dozens of fake news reports daily. The number of fake articles is only limited by the slight imagination necessary to write a short prompt.
Before you get tricked by a fake news report, here are some ways to spot a malicious use of AI intended to mislead your political leanings:
Is what you’re reading, seeing, or hearing too bizarre to be true? Then it probably isn’t. If you’re interested in learning more about a political topic you came across on social media, do a quick search to corroborate the story. Have a list of respected news establishments bookmarked to make it quick and easy to verify the authenticity of a report.
If you encounter fake news, the best way you can interact with it is to ignore it. Or, in cases where the content is offensive or incendiary, you should report it. Even if the fake news is laughably off-base, it’s still best not to share it with your network, because that’s exactly what the original poster wants: For as many people as possible to see their fabricated stories. All it takes is for someone within your network to look at it too quickly, believe it, and then perpetuate the lies.
It’s great if you’re passionate about politics and the various issues on the ballot. Passion is a powerful driver of change. But this election season, try to focus on what unites us, not what divides us.
The post This Election Season, Be on the Lookout for AI-generated Fake News appeared first on McAfee Blog.
Are we in the Wild West? Scrolling through your social media feed and breaking news sites, all the mentions of artificial intelligence and its uses makes the online world feel like a lawless free-for-all.
While your school or workplace may have rules against using AI for projects, as of yet there are very few laws regulating the usage of mainstream AI content generation tools. As the technology advances, are laws likely to follow? Let’s explore.
As of August 2023, there are no specific laws in the United States governing the general public’s usage of AI. For example, there are no explicit laws banning someone from making a deepfake of a celebrity and posting it online. However, if a judge could construe the deepfake as defamation or if it was used as part of a scam, it could land the creator in hot water.
The White House issued a draft of an artificial intelligence bill of rights that outlines best practices for AI technologists. This document isn’t a law, though. It’s more of a list of suggestions to urge developers to make AI unbiased, as accurate as possible, and to not completely rely on AI when a human could perform the same job equally well.1 The European Union is in the process of drafting the EU AI Act. Similar to the American AI Bill of Rights, the EU’s act is mostly directed toward the developers responsible for calibrating and advancing AI technology.2
China is one country that has formal regulations on the use of AI, specifically deepfake. A new law states that a person must give express consent to allow their faces to be used in a deepfake. Additionally, the law bans citizens from using deepfake to create fake news reports or content that could negatively affect the economy, national security, or China’s reputation. Finally, all deepfakes must include a disclaimer announcing that the content was synthesized.3
As scammers, edgy content creators, and money-conscious executives push the envelope with deepfake, AI art, and text-generation tools, laws governing AI use may be key to stopping the spread of fake news and protecting people’s livelihoods and reputations.
Deepfake challenges the notion that “seeing is believing.” Fake news reports can be dangerous to society when they encourage unrest or spark widespread outrage. Without treading upon the freedom of speech, is there a way for the U.S. and other countries to regulate deepfake creators with intentions of spreading dissent? China’s mandate that deepfake content must include a disclaimer could be a good starting point.
The Writers Guild of America (WGA) and Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) strikes are a prime example of how unbridled AI use could turn invasive and impact people’s jobs. In their new contract negotiations, each union included an AI clause. The writers are asking that AI-generated text not be allowed to write “literary material.” Actors are arguing against the widespread use of AI-replication of actors’ voices and faces. While deepfake could save (already rich) studios millions, these types of recreations could put actors out of work and would allow the studio to use the actors’ likeness in perpetuity without the actors’ consent.4 Future laws around the use of AI will likely include clauses about consent and about the risks text generators introduce to any project.
In the meantime, while the world awaits more guidelines and firm regulations governing mainstream AI tools, the best way you can interact with AI in daily life is to do so responsibly and in moderation. This includes using your best judgment and being transparent whenever AI is involved in a project.
Overall, whether in your professional or personal life, it’s best to view AI as a partner, not as a replacement.
1The White House, “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People”
2European Parliament, “EU AI Act: first regulation on artificial intelligence”
3CNBC, “China is about to get tougher on deepfakes in an unprecedented way. Here’s what the rules mean”
4NBC News, “Actors vs. AI: Strike brings focus to emerging use of advanced tech”
The post The Wild West of AI: Do Any AI Laws Exist? appeared first on McAfee Blog.
It’s not all funny limericks, bizarre portraits, and hilarious viral skits. ChatGPT, Bard, DALL-E, Craiyon, Voice.ai, and a whole host of other mainstream artificial intelligence tools are great for whiling away an afternoon or helping you with your latest school or work assignment; however, cybercriminals are bending AI tools like these to aid in their schemes, adding a whole new dimension to phishing, vishing, malware, and social engineering.
Here are some recent reports of AI’s use in scams plus a few pointers that might tip you off should any of these happen to you.
Vishing – or phishing over the phone – is not a new scheme; however, AI voice mimickers are making these scamming phone calls more believable than ever. In Arizona, a fake kidnapping phone call caused several minutes of panic for one family, as a mother received a demand for ransom to release her alleged kidnapped daughter. On the phone, the mother heard a voice that sounded exactly like her child’s, but it turned out to be an AI-generated facsimile.
In reality, the daughter was not kidnapped. She was safe and sound. The family didn’t lose any money because they did the right thing: They contacted law enforcement and kept the scammer on the phone while they located the daughter.1
Imposter scams accounted for a loss of $2.6 billion in the U.S. in 2022. Emerging AI scams could increase that staggering total. Globally, about 25% of people have either experienced an AI voice scam or know someone who has, according to McAfee’s Beware the Artificial Imposter report. Additionally, the study discovered that 77% of voice scam targets lost money as a result.
No doubt about it, it’s frightening to hear a loved one in distress, but try to stay as calm as possible if you receive a phone call claiming to be someone in trouble. Do your best to really listen to the “voice” of your loved one. AI voice technology is incredible, but there are still some kinks in the technology. For example, does the voice have unnatural hitches? Do words cut off just a little too early? Does the tone of certain words not quite match your loved one’s accent? To pick up on these small details, a level head is necessary.
What you can do as a family today to avoid falling for an AI vishing scam is to agree on a family password. This can be an obscure word or phrase that is meaningful to you. Keep this password to yourselves and never post about it on social media. This way, if a scammer ever calls you claiming to have or be a family member, this password could distinguish a fake emergency from a real one.
Deepfake, or the digital manipulation of an authentic image, video, or audio clip, is an AI capability that unsettles a lot of people. It challenges the long-held axiom that “seeing is believing.” If you can’t quite believe what you see, then what’s real? What’s not?
The FBI is warning the public against a new scheme where cybercriminals are editing explicit footage and then blackmailing innocent people into sending money or gift cards in exchange for not posting the compromising content.2
Deepfake technology was also at the center of an incident involving a fake ad. A scammer created a fake ad depicting Martin Lewis, a trusted finance expert, advocating for an investment venture. The Facebook ad attempted to add legitimacy to its nefarious endeavor by including the deepfaked Lewis.3
No response is the best response to a ransom demand. You’re dealing with a criminal. Who’s to say they won’t release their fake documents even if you give in to the ransom? Involve law enforcement as soon as a scammer approaches you, and they can help you resolve the issue.
Just because a reputable social media platform hosts an advertisement doesn’t mean that the advertiser is a legitimate business. Before buying anything or investing your money with a business you found through an advertisement, conduct your own background research on the company. All it takes is five minutes to look up its Better Business Bureau rating and other online reviews to determine if the company is reputable.
To identify a deepfake video or image, check for inconsistent shadows and lighting, face distortions, and people’s hands. That’s where you’ll most likely spot small details that aren’t quite right. Like AI voices, deepfake technology is often accurate, but it’s not perfect.
Content generation tools have some safeguards in place to prevent them from creating text that could be used illegally; however, some cybercriminals have found ways around those rules and are using ChatGPT and Bard to assist in their malware and phishing operations. For example, if a criminal asked ChatGPT to write a key-logging malware, it would refuse. But if they rephrased and asked it to compose code that captures keystrokes, it may comply with that request. One researcher demonstrated that even someone with little knowledge of coding could use ChatGPT, thus making malware creation simpler and more available than ever.4 Similarly, AI text generation tools can create convincing phishing emails and create them quickly. In theory, this could speed up a phisher’s operation and widen their reach.
You can avoid AI-generated malware and phishing correspondences the same way you deal with the human-written variety: Be careful and distrust anything that seems suspicious. To steer clear of malware, stick to websites you know you can trust. A safe browsing tool like McAfee web protection – which is included in McAfee+ – can doublecheck that you stay off of sketchy websites.
As for phishing, when you see emails or texts that demand a quick response or seem out of the ordinary, be on alert. Traditional phishing correspondences are usually riddled with typos, misspellings, and poor grammar. AI-written lures are often written well and rarely contain errors. This means that you must be diligent in vetting every message in your inbox.
While the debate about regulating AI heats up, the best thing you can do is to use AI responsibly. Be transparent when you use it. And if you suspect you’re encountering a malicious use of AI, slow down and try your best to evaluate the situation with a clear mind. AI can create some convincing content, but trust your instincts and follow the above best practices to keep your money and personal information out of the hands of cybercriminals.
1CNN, “‘Mom, these bad men have me’: She believes scammers cloned her daughter’s voice in a fake kidnapping”
2NBC News, “FBI warns about deepfake porn scams”
3BBC, “Martin Lewis felt ‘sick’ seeing deepfake scam ad on Facebook”
4Dark Reading, “Researcher Tricks ChatGPT Into Building Undetectable Steganography Malware”
The post AI in the Wild: Malicious Applications of Mainstream AI Tools appeared first on McAfee Blog.
Are you skeptical about mainstream artificial intelligence? Or are you all in on AI and use it all day, every day?
The emergence of AI in daily life is streamlining workdays, homework assignments, and for some, personal correspondences. To live in a time where we can access this amazing technology from the smartphones in our pockets is a privilege; however, overusing AI or using it irresponsibly could cause a chain reaction that affects not only you but also your close circle and society beyond.
Here are four tips to help you navigate and use AI responsibly.
Artificial intelligence certainly earns the “intelligence” part of its name, but that doesn’t mean it never makes mistakes. Make sure to proofread or review everything AI creates, be it written, visual, or audio content.
For instance, if you’re seeking a realistic image or video, AI often adds extra fingers and distorts faces. Some of its creations can be downright nightmarish! Also, there’s a phenomenon known as an AI hallucination. This occurs when the AI doesn’t admit that it doesn’t know the answer to your question. Instead, it makes up information that is untrue and even fabricates fake sources to back up its claims.
One AI hallucination landed a lawyer in big trouble in New York. The lawyer used ChatGPT to write a brief, but he didn’t double check the AI’s work. It turns out the majority of the brief was incorrect.1
Whether you’re a blogger with thousands of readers or you ask AI to write a little blurb to share amongst your friends or coworkers, it is imperative to edit everything that an AI tool generates. Not doing so could start a rumor based on a completely false claim.
If you use AI to do more than gather a few rough ideas, you should cite the tool you used as a source. Passing off an AI’s work as your own could be considered cheating in the eyes of teachers, bosses, or critics.
There’s a lot of debate about whether AI has a place in the art world. One artist entered an image into a photography contest that he secretly created with AI. When his submission won the contest, the photographer revealed AI’s role in the image and gave up his prize. The photographer intentionally kept AI out of the conversation to prove a point, but imagine if he kept the image’s origin to himself.2 Would that be fair? When other photographers had to wait for the perfect angle of sunlight or catch a fleeting moment in time, should an AI-generated image with manufactured lighting and static subjects be judged the same way?
Even if you don’t personally use AI, you’re still likely to encounter it daily, whether you realize it or not. AI-generated content is popular on social media, like the deepfake video game battles between politicians.3 (A deepfake is a manipulation of a photo, video, or audio clip that depicts something that never happened.) The absurdity of this video series is likely to tip off the viewer to its playful intent, though it’s best practice to add a disclaimer to any deepfake.
Some deepfakes have a malicious intent on top of looking and sounding very realistic. Especially around election time, fake news reports are likely to swirl and discredit the candidates. A great rule of thumb is: If it seems too fantastical to be true, it likely isn’t. Sometimes all it takes is five minutes to verify the authenticity of a social media post, photo, video, or news report. Think critically about the authenticity of the report before sharing. Fake news reports spread quickly, and many are incendiary in nature.
According to “McAfee’s Modern Love Research Report,” 26% of respondents said they would use AI to write a love note; however, 49% of people said that they’d feel hurt if their partner tasked a machine with writing a love note instead of writing one with their own human heart and soul.
Today’s AI is not sentient. That means that even if the final output moved you to tears or to laugh out loud, the AI itself doesn’t truly understand the emotions behind what it creates. It’s simply using patterns to craft a reply to your prompt. Hiding your true feelings or funneling them into a computer program could result in a shaky and secretive relationship.
Plus, if everyone relied upon AI content generation tools like ChatGPT, Bard, and Copy.ai, then how can we trust any genuine display of emotion? What would the future of novels, poetry, and even Hollywood look like?
Responsible AI is a term that governs the responsibilities programmers have to society to ensure they populate AI systems with bias-free and accurate data. OpenAI (the organization behind ChatGPT and DALL-E) vows to act in “the best interests of humanity.”4 From there, the everyday people who interact with AI must similarly act in the best interests of themselves and those around them to avoid unleashing the dangers of AI upon society.
The capabilities of AI are vast, and the technology is getting more sophisticated by the day. To ensure that the human voice and creative spirit don’t permanently take on a robotic feel, it’s best to use AI in moderation and be open with others about how you use it.
To give you additional peace of mind, McAfee+ can restore your online privacy and identity should you fall into an AI-assisted scam. With identity restoration experts and up to $2 million in identity theft coverage, you can feel better about navigating this new dimension in the online world.
1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”
2ARTnews, “Artist Wins Photography Contest After Submitting AI-Generated Image, Then Forfeits Prize”
3Business Insider, “AI-generated audio of Joe Biden and Donald Trump trashtalking while gaming is taking over TikTok”
4OpenAI, “OpenAI Charter”
The post Four Ways To Use AI Responsibly appeared first on McAfee Blog.
Artificial intelligence used to be reserved for the population’s most brilliant scientists and isolated in the world’s top laboratories. Now, AI is available to anyone with an internet connection. Tools like ChatGPT, Voice.ai, DALL-E, and others have brought AI into daily life, but sometimes the terms used to describe their capabilities and inner workings are anything but mainstream.
Here are 10 common terms you’re likely to hear in the same sentence as your favorite AI tool, on the nightly news, or by the water cooler. Keep this AI dictionary handy to stay informed about this popular (and sometimes controversial) topic.
AI-generated content is any piece of written, audio, or visual media that was created partially or completely by an artificial intelligence-powered tool.
If someone uses AI to create something, it doesn’t automatically mean they cheated or irresponsibly cut corners. AI is often a great place to start when creating outlines, compiling thought-starters, or seeking a new way of looking at a problem.
When your question stumps an AI, it doesn’t always admit that it doesn’t know the answer. So, instead of not giving an answer, it’ll make one up that it thinks you want to hear. This made-up answer is known as an AI hallucination.
One real-world case of a costly AI hallucination occurred in New York where a lawyer used ChatGPT to write a brief. The brief seemed complete and cited its sources, but it turns out that none of the sources existed.1 It was all a figment of the AI’s “imagination.”
To understand the term black box, imagine the AI as a system of cogs, pulleys, and conveyor belts housed within a box. In a see-through box, you can see how the input is transformed into the final product; however, some AI models are referred to as black boxes. That means you don’t know how the AI arrived at its conclusions. The AI completely hides its reasoning process. A black box can be a problem if you’d like to doublecheck the AI’s work.
Deepfake is the manipulation of a photo, video, or audio clip to portray events that never happened. While often used for humorous social media skits and viral posts, deepfake is also leveraged by unsavory characters to spread fake news reports or scam people.
For example, people are inserting politicians into unflattering poses and photo backgrounds. Sometimes the deepfake is intended to get a laugh, but other times the deepfake creator intends to spark rumors that could lead to dissent or tarnish the reputation of the photo subject. One tip to spot a deepfake image is to look at the hands and faces of people in the background. Deepfakes often add or subtract fingers or distort facial expressions.
AI-assisted audio impersonations – which are considered deepfakes – are also rising in believability. According to McAfee’s “Beware the Artificial Imposter” report, 25% of respondents globally said that a voice scam happened either to themselves or to someone they know. Seventy-seven percent of people who were targeted by a voice scam lost money as a result.
Deep learning involves training an AI on layered neural networks loosely modeled on the human brain, meaning the machine can identify patterns and make predictions. Generally, the more closely an AI’s reasoning mimics human thinking, the more accurate its output is likely to be.
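To make the pattern-learning idea concrete, here’s a deliberately tiny sketch in Python. It’s a hypothetical toy, not how any production deep learning system is built: a single artificial “neuron” adjusts its weights from examples until it has learned the logical AND pattern.

```python
# Toy illustration of pattern learning: one artificial "neuron" learns
# the logical AND pattern from labeled examples. Real deep learning
# stacks many layers of these units, but the core idea is the same.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Nudge the weights toward whatever reduces the prediction error."""
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), label in samples:
            prediction = 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
            error = label - prediction
            w0 += lr * error * x0
            w1 += lr * error * x1
            bias += lr * error
    return w0, w1, bias

def predict(weights, x0, x1):
    w0, w1, bias = weights
    return 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0

# The pattern to learn: output 1 only when both inputs are 1.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train_perceptron(and_samples)
print([predict(weights, a, b) for (a, b), _ in and_samples])  # [0, 0, 0, 1]
```

After training, the neuron has never been told the rule for AND; it has simply found weights that fit the pattern in the examples, which is the essence of “identify patterns and make predictions.”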
Explainable AI – or white box – is the opposite of black box AI. An explainable AI model always shows its work and how it arrived at its conclusion. Explainable AI can boost your confidence in the final output because you can double-check what went into the answer.
Generative AI is the type of artificial intelligence that powers many of today’s mainstream AI tools, like ChatGPT, Bard, and Craiyon. Like a sponge, generative AI soaks up huge amounts of data and recalls it to inform every answer it creates.
Machine learning is integral to AI, because it lets the AI learn and continually improve. Without explicit instructions to do so, machine learning within AI allows the AI to get smarter the more it’s used.
People must not only use AI responsibly, but the people designing and programming AI must do so responsibly, too. Technologists must ensure that the data the AI depends on is accurate and free from bias. This diligence is necessary to confirm that the AI’s output is correct and without prejudice.
Sentient is an adjective that means someone or something is aware of feelings, sensations, and emotions. In futuristic movies depicting AI, the characters’ world goes off the rails when the robots become sentient, or when they “feel” human-like emotions. While it makes for great Hollywood drama, today’s AI is not sentient. It doesn’t empathize or understand the true meanings of happiness, excitement, sadness, or fear.
So, even if an AI composed a short story that is so beautiful it made you cry, the AI doesn’t know that what it created was touching. It was just fulfilling a prompt and used a pattern to determine which word to choose next.
1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”
The post 10 Artificial Intelligence Buzzwords You Should Know appeared first on McAfee Blog.
It’s all anyone can talk about. In classrooms, boardrooms, on the nightly news, and around the dinner table, artificial intelligence (AI) is dominating conversations. With the passion with which everyone is debating, celebrating, and villainizing AI, you’d think it was a completely new technology; however, AI has been around in various forms for decades. Only now is it accessible to everyday people like you and me.
The most famous of these mainstream AI tools are ChatGPT, DALL-E, and Bard, among others. The specific technology that links these tools is called generative artificial intelligence. Sometimes shortened to gen AI, you’re likely to have heard this term in the same sentence as deepfake, AI art, and ChatGPT. But how does the technology work?
Here’s a simple explanation of how generative AI powers many of today’s famous (or infamous) AI tools.
Generative AI is the specific type of artificial intelligence that powers many of the AI tools available today in the pockets of the public. The “G” in ChatGPT stands for generative. Today’s gen AI evolved from the chatbots of the 1960s. Now, as AI and related technologies like deep learning and machine learning have evolved, generative AI can answer prompts and create text, art, videos, and even simulate convincing human voices.
Think of generative AI as a sponge that desperately wants to delight the users who ask it questions.
First, a gen AI model begins with a massive information deposit. Gen AI can soak up huge amounts of data. For instance, ChatGPT was trained on roughly 300 billion words, hundreds of gigabytes’ worth of text. The AI draws on every piece of information that is fed into it, using those nuggets of knowledge to inform any answer it spits out.
From there, a generative adversarial network (GAN) algorithm can refine the model. A GAN pits two networks against each other: one generates content while the other judges how realistic it is, so the model constantly tries to outdo itself to produce the most convincing answer. The more information and queries it answers, the “smarter” the AI becomes.
Google’s content generation tool, Bard, is a great way to illustrate generative AI in action. Bard is based on gen AI and large language models. It’s trained on all types of literature, and when asked to write a short story, it composes by finding language patterns and choosing the words that most often follow the ones preceding them. In a 60 Minutes segment, Bard composed an eloquent short story that nearly brought the presenter to tears, but its composition was an exercise in patterns, not a display of understanding human emotions. So, while the technology is certainly smart, it’s not exactly creative.
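The “choose the word that most often follows the one before it” idea can be sketched with a toy bigram model in Python. This is a drastic simplification of what large language models actually do (they weigh far more context than a single word), and the training text below is invented purely for illustration:

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word most often follows each word in the
# training text, then "compose" by always picking the most frequent
# follower. Pattern-matching, not understanding.

training_text = (
    "the storm rolled in and the storm passed and "
    "the sun rose over the quiet town"
)

follow_counts = defaultdict(Counter)
words = training_text.split()
for prev_word, nxt in zip(words, words[1:]):
    follow_counts[prev_word][nxt] += 1

def next_word(word):
    """Return the word that most often followed `word` in training."""
    return follow_counts[word].most_common(1)[0][0]

# Generate a short "story" one most-likely word at a time.
word = "the"
story = [word]
for _ in range(3):
    word = next_word(word)
    story.append(word)
print(" ".join(story))  # the storm rolled in
```

The output reads naturally only because those words co-occurred in the training text; the model has no idea what a storm is, which mirrors why Bard’s moving short story was patterns rather than feeling.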
The major debates surrounding generative AI usually deal with how to use gen AI-powered tools for good. For instance, ChatGPT can be an excellent outlining partner if you’re writing an essay or completing a task at work; however, it’s irresponsible and is considered cheating if a student or an employee submits ChatGPT-written content word for word as their own work. If you do decide to use ChatGPT, it’s best to be transparent that it helped you with your assignment. Cite it as a source and make sure to double-check your work!
One lawyer got in serious trouble when he trusted ChatGPT to write an entire brief and then didn’t take the time to edit its output. It turns out that much of the content was incorrect and cited sources that didn’t exist. This is a phenomenon known as an AI hallucination, meaning the program fabricated a response instead of admitting that it didn’t know the answer to the prompt.
Deepfake and voice simulation technology supported by generative AI are other applications that people must use responsibly and with transparency. Deepfake and AI voices are gaining popularity in viral videos and on social media. Posters use the technology in funny skits poking fun at celebrities, politicians, and other public figures. However, to avoid confusing the public and possibly spurring fake news reports, these comedians have a responsibility to add a disclaimer that the real person was not involved in the skit. Fake news reports can spread with the speed and ferocity of wildfire.
The widespread use of generative AI doesn’t necessarily mean the internet is a less authentic or a riskier place. It just means that people must use sound judgment and hone their radar for identifying malicious AI-generated content. Generative AI is an incredible technology. When used responsibly, it can add great color, humor, or a different perspective to written, visual, and audio content.
Technology can also help protect against voice cloning attacks. Tools like McAfee Deepfake Detector aim to detect AI-generated deepfakes, including audio-based clones. Stay informed about advancements in security technology and consider utilizing such tools to bolster your defenses.
The post What Is Generative AI and How Does It Work? appeared first on McAfee Blog.
Happy World Social Media Day! Today’s a day about celebrating the life-long friendships you’ve made thanks to social media. Social media was invented to help users meet new people with shared interests, stay in touch, and learn more about the world. Facebook, Twitter, Instagram, Reddit, TikTok, LinkedIn, and the trailblazing MySpace have all certainly succeeded in those aims.
This is the first World Social Media Day where artificial intelligence (AI) joins the party. AI has existed in many forms for decades, but it’s only recently that AI-powered apps and tools are available in the pockets and homes of just about everyone. ChatGPT, Voice.ai, DALL-E, and others are certainly fun to play with and can even speed up your workday.
While scrolling through hilarious videos and commenting on your friends’ life milestones are practically national pastimes, some people are making it their pastime to fill our favorite social media feeds with AI-generated content. Not all of it is malicious, but some AI-generated social media posts are scams.
Here are some examples of common AI-generated content that you’re likely to encounter on social media.
Have you scrolled through your video feed and come across voices that sound exactly like the current and former presidents? And are they playing video games together? Comic impersonators can be hilariously accurate with their copycatting, but the voice track to this video is spot on. This series of videos, created by TikToker Voretecks, uses AI voice generation to mimic presidential voices and pit them against each other to bring joy to their viewers.1 In this case, AI-generated voices are mostly harmless, since the videos are in jest. Context clues make it obvious that the presidents didn’t gather to hunt rogue machines together.
AI voice generation turns nefarious when it’s meant to trick people into thinking or acting a certain way. For example, an AI voiceover made it sound like a candidate for Chicago mayor said something inflammatory that he never said.2 Fake news is likely to skyrocket with the fierce 2024 election on the horizon. Social media sites, especially Twitter, are an effective avenue for political saboteurs to spread their lies far and wide to discredit their opponents.
Finally, while it might not appear on your social media feed, scammers can use what you post on social media to impersonate your voice. According to McAfee’s Beware the Artificial Imposter report, a scammer requires only three seconds of audio to clone your voice. From there, the scammer may reach out to your loved ones with extremely realistic phone calls to steal money or sensitive personal information. The report also found that of the people who lost money to an AI voice scam, 36% said they lost between $500 and $3,000.
To keep your voice out of the hands of scammers, perhaps be more mindful of the videos or audio clips you post publicly. Also, consider having a secret safe word with your friends and family that would stump any would-be scammer.
Deepfake, or the alteration of an existing photo or video of a real person that shows them doing something that never happened, is another tactic used by social media comedians and fake news spreaders alike. In the case of the former, one company founded their entire business upon deepfake. The company is most famous for its deepfakes of Tom Cruise, though it’s evolved into impersonating other celebrities, generative AI research, and translation.3
When you see videos or images on social media that seem odd, look for a disclaimer – either on the post itself or in the poster’s bio – about whether the poster used deepfake technology to create the content. A responsible social media user will alert their audiences when the content they post is AI generated.
Again, deepfake and other AI-altered images become malicious when they cause social media viewers to think or act a certain way. Fake news outlets may portray a political candidate doing something embarrassing to sway voters. Or an AI-altered image of animals in need may tug at the heartstrings of social media users and cause them to donate to a fake fundraiser. Deepfake challenges the saying “seeing is believing.”
ChatGPT is everyone’s favorite creativity booster and taskmaster for any writing chore. It is also the new best friend of social media bot accounts. Present on just about every social media platform, bot accounts spread spam, fake news, and bolster follower numbers. Bot accounts used to be easy to spot because their posts were unoriginal and poorly written. Now, with the AI-assisted creativity and excellent sentence-level composition of ChatGPT, bot accounts are sounding a lot more realistic. And the humans managing those hundreds of bot accounts can now create content more quickly than if they were writing each post themselves.
In general, be wary when anyone you don’t know comments on one of your posts or reaches out to you via direct message. If someone says you’ve won a prize but you don’t remember ever entering a contest, ignore it.
With the advent of mainstream AI, everyone should approach every social media post with skepticism. Be on the lookout for anything that seems amiss or too fantastical to be true. And before you share a news item with your following, conduct your own background research to assert that it’s true.
To protect or restore your identity should you fall for any social media scams, you can trust McAfee+. McAfee+ monitors your identity and credit to help you catch suspicious activity early. Also, you can feel secure in the $1 million in identity theft coverage and identity restoration services.
Social media is a fun way to pass the time, keep up with your friends, and learn something new. Don’t be afraid of AI on social media. Instead, laugh at the parodies, ignore and report the fake news, and enjoy social media confidently!
1Business Insider, “AI-generated audio of Joe Biden and Donald Trump trashtalking while gaming is taking over TikTok”
2The Hill, “The impending nightmare that AI poses for media, elections”
3Metaphysic, “Create generative AI video that looks real”
The post Be Mindful of These 3 AI Tricks on World Social Media Day appeared first on McAfee Blog.
What’s blockchain technology? The term gets bandied about often enough, but it doesn’t always get the explanation it deserves.
Understanding the basics of blockchain can help you understand several of the big changes that are taking place online. It’s the foundational technology that underpins cryptocurrency and NFTs (non-fungible tokens), yet it has several other emerging applications as well.
In all, gaining a sense of how blockchain technology works will give you a further sense as to how it may eventually shape the way you go about your day.
Blockchain technology holds great potential because of the unique, decentralized way it handles data—which marks the first step in understanding how it works.
An easy way to visualize how a blockchain works is with an old-fashioned ledger. Each ledger entry is a link in a “chain.” Each link holds a unique identifier known as a hash and a block of data associated with it. Over time, new blocks get added to the chain, and each new block records the hash of the block before it, linking the entries together.
A simplified example of a blockchain storing recipe instructions. The Previous Hash and Stuff (data) fields generate the Hash field. This Hash becomes part of the next record.
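That ledger idea can be sketched in a few lines of Python. The field names mirror the example above (Previous Hash, Stuff, Hash) and the recipe data is purely illustrative: each block’s hash is computed from its data plus the previous block’s hash, so tampering with any entry breaks every link after it.

```python
import hashlib

# Minimal hash-chained ledger sketch. Each block's hash covers both its
# own data and the previous block's hash, linking the entries together.

def block_hash(previous_hash, stuff):
    return hashlib.sha256((previous_hash + stuff).encode()).hexdigest()

def add_block(chain, stuff):
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({
        "previous_hash": previous_hash,
        "stuff": stuff,
        "hash": block_hash(previous_hash, stuff),
    })

def chain_is_valid(chain):
    """Recompute every hash; any tampering breaks the links that follow."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["previous_hash"] != expected_prev:
            return False
        if block["hash"] != block_hash(block["previous_hash"], block["stuff"]):
            return False
    return True

chain = []
for step in ["Preheat oven", "Mix ingredients", "Bake 30 minutes"]:
    add_block(chain, step)

print(chain_is_valid(chain))          # True
chain[1]["stuff"] = "Bake 3 minutes"  # quietly edit one ledger entry...
print(chain_is_valid(chain))          # ...and validation now fails: False
```

This is why a blockchain is often described as tamper-evident: you can’t silently rewrite one entry without recomputing, and getting everyone to accept, every block that follows it.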
Yet one of the most important aspects of blockchain technology is this—it’s decentralized. Dozens, hundreds, thousands, or more participants in the blockchain track and validate the transactions associated with it.
Each blockchain entry gets validated through consensus, where individual participants on a blockchain network must all “agree” that the data in each entry is correct. Participants in the blockchain network can arrive at consensus through several models, yet commonly they use cryptographic calculations to validate an update to the chain.
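As a hedged illustration of those cryptographic calculations, here’s a toy proof-of-work round in Python. Proof of work is one common consensus model; the difficulty is kept tiny here so the sketch runs instantly, whereas real networks use vastly harder targets and different encodings.

```python
import hashlib

# Toy proof of work: keep trying nonce values until the block's hash
# starts with the required number of zeros. Finding the nonce is
# expensive; checking someone else's nonce is cheap, which is what lets
# many participants agree on a valid update without a central authority.

def mine(block_data, difficulty=4):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

def verify(block_data, nonce, digest, difficulty=4):
    """Any participant can cheaply confirm the expensive work was done."""
    recomputed = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
    return recomputed == digest and digest.startswith("0" * difficulty)

nonce, digest = mine("send 5 coins from A to B")
print(verify("send 5 coins from A to B", nonce, digest))  # True
```

The asymmetry between mining and verifying is the point: one honest check per participant is enough for the network to “agree” that an update is valid.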
In this way, blockchain technology removes the need for a central authority to oversee a transaction, such as a bank. Put simply, blockchain gets rid of the go-between. And it makes transactions more anonymous as a result.
Participants in a blockchain network receive a small amount of cryptocurrency per transaction as a reward for their efforts. Enter the notion of crypto mining, where some miners set up large-scale farms of powerful, specialized computers that participate in blockchain networks.
Blockchains come in public and private forms. Public is just as it sounds, where anyone can participate in the blockchain. They can read, write, or validate data in the blockchain. Private blockchains are invite-only in nature and can establish rules about who can alter the blockchain.
Many blockchain ledger entries record financial transactions associated with cryptocurrency. However, ledger entries can contain any type of data. One can just as easily store documents, images, log files, or other items in a blockchain. Even decentralized programs, also known as smart contracts, can be stored.
In all, there’s much more to blockchain technology than just cryptocurrency.
First and foremost, blockchain technology is at the heart of cryptocurrency. Wherever cryptocurrency is bought, spent, or exchanged, the blockchain is there to facilitate the transaction. However, we can point to several new and emerging applications as well, including:
Blockchain technology offers several benefits, yet it has its downsides as well.
Decentralization removes the need for third parties in transactions because the blockchain provides the verification and oversight for the transaction to go through. In the case of financial transactions, that removes the need for banks. In the sale of property, that removes the need for a title company.
However, if there is a conflict or issue between the parties, they have no central authority to manage its resolution. (See this story written by a BBC journalist about his quest to recover stolen crypto funds.)
Additionally, decentralization can afford parties anonymity, which can cover up illegal activities—thus making cryptocurrency the coin of the realm for scammers and murky marketplaces on the dark web.
Blockchain technology is open, meaning that theoretically anyone with a specially equipped device can generate revenue as a miner in the blockchain economy. Yet the reality is that much of the technology is in the hands of the few. For starters, these mining devices are expensive. Secondly, it takes hundreds of these devices to mine effectively, which points to the advent of the industrial-sized mining farms mentioned above.
To put it all into perspective, one study estimated that “(t)he top 10% of [Bitcoin] miners control 90% and just 0.1% (about 50 miners) control close to 50% of mining capacity.”
Additionally, all that computing power comes at an additional cost—energy. It takes electricity to run those huge mining farms, and it takes yet more electricity to keep them cool. As a result, crypto mining can generate an outsized carbon footprint if the electricity is generated with fossil fuels.
Image and data courtesy of Digiconomist
Of note, the second-largest cryptocurrency, Ethereum, has made great strides on the energy consumption front. It updated the way the cryptocurrency arrives at consensus in its blockchain and uses far less energy as a result. Estimates show that Ethereum’s carbon footprint decreased by about 99.992%, from 11,016,000 to 870 metric tons of CO2.
As far as technology goes, we still live in the relatively early days of blockchain. And while much of its popular focus revolves around its role in cryptocurrencies like Bitcoin, the technology offers more than that. Of course, it remains to be seen which of its applications will take root.
Blockchain has its own barriers, though, particularly when it comes to security. Like any other connected technology, it finds itself the target of hacks and attacks. Billions of dollars in cryptocurrency have been stolen from individual users and exchanges over the years.
The security issue isn’t necessarily with the blockchain itself. That’s highly difficult to hack thanks to encryption and the decentralized nature of the blockchain. Instead, the networks that blockchains run on are subject to attack—such as interception attacks where bad actors extract information or cryptocurrency. Other attacks involve flooding the blockchain network with false identities (known as a Sybil attack) that ultimately crash the system. And yet more exploit weaknesses in the security protocols used by platforms like cryptocurrency exchanges.
Then there’s the tried-and-true phishing attack, where scammers dupe victims into handing over their personal encryption keys. With a key, the scammer can empty digital wallets of their cryptocurrency or compromise a private blockchain network and the data in it.
Clearly, the future remains speculative as people and organizations explore the uses of blockchain technology. Without question, security will play a major role in its adoption.
Unless you’re dabbling in cryptocurrency yourself, blockchain will likely remain a behind-the-scenes technology. At least for the time being.
Yet it can still shape your day in some way. It might help bring fresher produce to your market. It might secure smart utilities and smart infrastructure in your city. And it might give your auto manufacturer a powerful tool for identifying and recalling a faulty part in your car.
Although barriers of security, energy consumption, and equity remain, it stands a good chance that blockchain technology will continue to change our lives. And understanding how it works can help you better understand those changes.
The post Blockchain Basics: What’s Blockchain Technology and How Might It Change Our Lives? appeared first on McAfee Blog.
Over the decades, Hollywood has depicted artificial intelligence (AI) in multiple unsettling ways. In their futuristic settings, the AI begins to think for itself, outsmarts the humans, and overthrows society. The resulting dark world is left in a constant barrage of storms – metaphorically and meteorologically. (It’s always so gloomy and rainy in those movies.)
AI has been a part of manufacturing, shipping, and other industries for several years now. But the emergence of mainstream AI in daily life is stirring debates about its use. Content, art, video, and voice generation tools can make you write like Shakespeare, look like Tom Cruise, or create digital masterpieces in the style of Van Gogh. While it starts out as fun and games, an overreliance or misuse of AI can quickly turn shortcuts into irresponsibly cut corners and pranks into malicious impersonations.
It’s imperative that everyone interact responsibly with mainstream AI tools like ChatGPT, Bard, Craiyon, and Voice.ai, among others, to avoid these three real dangers of AI that you’re most likely to encounter.
The cool thing about AI is that it has advanced to the point where it appears to think for itself. It’s constantly learning and forming new patterns. The more questions you ask it, the more data it collects and the “smarter” it gets. However, when you ask ChatGPT a question it doesn’t know the answer to, it doesn’t admit that it doesn’t know. Instead, it’ll make up an answer like a precocious schoolchild. This phenomenon is known as an AI hallucination.
One prime example of an AI hallucination occurred in a New York courtroom. A lawyer presented a lengthy brief that cited multiple law cases to back his point. It turns out the lawyer used ChatGPT to write the entire brief and he didn’t fact check the AI’s work. ChatGPT fabricated its supporting citations, none of which existed.1
AI hallucinations could become a threat to society in that they could populate the internet with false information. Researchers and writers have a duty to thoroughly double-check any work they outsource to text generation tools like ChatGPT. When a trustworthy online source publishes content and asserts it as the unbiased truth, readers should be able to trust that the publisher isn’t leading them astray.
We all know that you can’t trust everything you read on the internet. Deepfake and AI-generated art deepen the mistrust. Now, you can’t trust everything you see on the internet.
Deepfake is the digital manipulation of a photo or video to portray an event that never happened or portray a person doing or saying something they never did or said. AI art creates new images using a compilation of published works on the internet to fulfill the prompt.
Deepfake and AI art become a danger to the public when people use them to supplement fake news reports. Individuals and organizations who feel strongly about their side of an issue may shunt integrity to the side to win new followers to their cause. Fake news is often incendiary and in extreme cases can cause unrest.
Before you share a “news” article with your social media following or shout about it to others, do some additional research to ensure its accuracy. Additionally, scrutinize the video or image accompanying the story. A deepfake gives itself away when facial expressions or hand gestures don’t look quite right. Also, the face may distort if the hands get too close to it. To spot AI art, think carefully about the context. Is it too fantastic or terrible to be true? Check out the shadows, shading, and the background setting for anomalies.
An emerging dangerous use of AI is cropping up in AI voice scams. Phishers have attempted to get people’s personal details and gain financially over the phone for decades. But now with the help of AI voice tools, their scams are entering a whole new dimension of believability.
With as little as three seconds of genuine audio, AI voice generators can mimic someone’s voice with up to 95% accuracy. While AI voice generators may add some humor to a comedy deepfake video, criminals are using the technology to seriously frighten people and scam them out of money at the same time. The criminal will impersonate someone using their voice and call the real person’s loved one, saying they’ve been robbed or sustained an accident. McAfee’s Beware the Artificial Imposter report discovered that 77% of people targeted by an AI voice scam lost money as a result. Seven percent of people lost between $5,000 and $15,000.
Google’s code of conduct states “Don’t be evil.”2 Because AI relies on input from humans, we have the power to make AI as benevolent or as malevolent as we are. There’s a certain amount of trust placed in the engineers who hold the future of the technology – and if Hollywood is to be believed, the fate of humanity – in their deft hands and brilliant minds.
“60 Minutes” ranked AI’s influence on society on a tier with fire, agriculture, and electricity.3 Because AI never has to take a break, it can learn and teach itself new things every second of every day. It’s advancing quickly and some of the written and visual art it creates can result in some touching expressions of humanity. But AI doesn’t quite understand the emotion it portrays. It’s simply a game of making patterns. Is AI – especially its use in creative pursuits – dimming the spark of humanity? That remains to be seen.
When used responsibly and in moderation in daily life, it may make us more efficient and inspire us to think in new ways. Be on the lookout for the dangers of AI and use this amazing technology for good.
1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”
2Alphabet, “Google Code of Conduct”
360 Minutes, “Artificial Intelligence Revolution”
The post The Dangers of Artificial Intelligence appeared first on McAfee Blog.
The dystopian 2020s, ’30s, and ’40s depicted in novels and movies written and produced decades ago blessedly seem very far off from the timeline of reality. Yes, we have refrigerators that suggest grocery lists, and we have devices in our pockets that control seemingly every function of our homes. But there aren’t giant robots roaming the streets bellowing orders.
Humans are still very much in control of society. To keep it that way, we must use the latest technological advancements for their main intended purpose: To boost convenience in our busy lives.
The future of technology is bright. With the right attitude, security tools, and a reminder every now and then to look up from our devices, humans will be able to enjoy everything the future of technology holds.
A look into the future of technology would be incomplete without touching on the applications and impact of artificial intelligence (AI) on everyday tasks. Platforms like ChatGPT, Voice.ai, and Craiyon have thrilled, shocked, and unnerved the world in equal measures. AI has already transformed work life, home life, and free time of everyday people everywhere.
According to McAfee’s Modern Love Research Report, 26% of people would use AI to aid in composing a love note. Plus, more than two-thirds of those surveyed couldn’t tell the difference between a love note written by AI and a human. AI can be a good tool to generate ideas, but replacing genuine human emotion with the words of a computer program could create a shaky foundation for a relationship.
The Center for AI Safety urges that humans must take an active role in using AI responsibly. Cybercriminals and unsavory online characters are already using it maliciously to gain financially and spread incendiary misinformation. For example, AI-generated voice imposters are scamming concerned family members and friends with heartfelt pleas for financial help with a voice that sounds just like their loved one. Voice scams are turning out to be fruitful for scammers: 77% of people polled who received a cloned voice scam lost money as a result.
Even people who aren’t intending mischief can cause a considerable amount when they use AI to cut corners. One lawyer’s testimony went awry when his research partner, ChatGPT, went rogue and completely made up its justification.1 This phenomenon is known as an AI hallucination. It occurs when ChatGPT or another similar AI content generation tool doesn’t know the answer to your question, so it fabricates sources and assures you that it’s telling the truth.
Overreliance on ChatGPT’s output and immediately trusting it as truth can lead to an internet rampant with fake news and false accounts. Keep in mind that using ChatGPT introduces risk in the content creation process. Use it responsibly.
Though it’s powered by AI and could fall under the AI section above, deepfake is exploding and deserves its own spotlight. Deepfake technology is the manipulation of videos to digitally transform one person’s appearance to resemble someone else’s, usually a public figure’s. Deepfake videos are often accompanied by AI-altered voice tracks. Deepfake challenges the accuracy of the common saying, “Seeing is believing.” Now, it’s more difficult than ever to separate fact from fiction.
Not all deepfake uses are nefarious. Deepfake could become a valuable tool in special effects and editing for the gaming and film industries. Additionally, forensic sketch artists could leverage deepfake technology to create ultra-realistic portraits of wanted fugitives. If you decide to use deepfake to add some flair to your social media feed or portfolio, make sure to add a disclaimer that you altered reality with the technology.
Currently, it’s estimated that there are more than 15 billion connected devices in the world. A connected device is defined as anything that connects to the internet. In addition to smartphones, computers, and tablets, connected devices also extend to smart refrigerators, smart lightbulbs, smart TVs, virtual home assistants, smart thermostats, etc. By 2030, there may be as many as 29 billion connected devices.2
The growing number of connected devices can be attributed to our desire for convenience. The ability to remote start your car on a frigid morning from the comfort of your home would’ve been a dream in the 1990s. Checking your refrigerator’s contents from the grocery store eliminates the need for a pesky second trip to pick up the items you forgot the first time around.
The downside of so many connected devices is that they present cybercriminals with literally billions of opportunities to steal people’s personally identifiable information. Each device is a window into your online life, so it’s essential to guard each device to keep cybercriminals away from your important personal details.
With the widespread adoption of email, then cellphones, and then social media in the ’80s, ’90s, and early 2000s, respectively, people have integrated technology into their daily lives that helps them better connect with other people. More recent technological innovations seem to trend toward connecting people to their other devices for a seamless digital life.
We shouldn’t ignore that the more devices and online accounts we manage, the more opportunities cybercriminals have to weasel their way into your digital life and put your personally identifiable information at risk. To protect your online privacy, devices, and identity, entrust your digital safety to McAfee+. McAfee+ includes $1 million in identity theft coverage, virtual private network (VPN), Personal Data Cleanup, and more.
The future isn’t a scary place. It’s a place of infinite technological possibilities! Explore them confidently with McAfee+ by your side.
1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”
2Statista, “Number of Internet of Things (IoT) connected devices worldwide from 2019 to 2021, with forecasts from 2022 to 2030”
The post The Future of Technology: AI, Deepfake, & Connected Devices appeared first on McAfee Blog.
Artificial intelligence: It’s society’s newest darling and newest villain. AI has become a best friend to creatives and time-strapped people, and, unfortunately, a sidekick to online scammers. AI platforms like ChatGPT, Craiyon, Voice.ai, and others are available to everyday people, meaning that the technology is creeping further into our daily lives. But is this mainstream AI for the better? Or for worse?
According to McAfee’s Modern Love Research Report, 27% of people who admitted that they planned to use AI on Valentine’s Day said it was to boost their confidence. For some people, pouring their heart onto the page is difficult and makes them feel vulnerable. If there’s a tool out there that lessens people’s fear of opening up, they should take advantage of it.
Just remember that honesty is the best policy. Tell your partner you employed the help of AI to express your feelings.
Sometimes you just don’t know how to start your next masterpiece, whether it’s a painting, short story, or business proposal. AI art and text generators are great brainstorming tools to get the creative juices flowing. The program may think of an approach you would never have considered.
Generative AI – the type of artificial intelligence technology behind many mainstream content generation platforms – isn’t new. In fact, scientists and engineers have been using it for decades to accelerate new materials and medical discoveries. For example, generative AI is central to inventing carbon capture materials that will be key to slowing the effects of global warming.1
Online tools fueled by generative AI can save you time too. For instance, AI can likely handle run-of-the-mill emails that you hardly have time to write between all your meetings.
What happens to genuine human connections as AI expands? The Modern Love Report discovered 49% of people would feel hurt if their partner used AI to write a love note. What they thought was written with true feeling was composed by a heartless computer program. ChatGPT composes its responses based on what’s published elsewhere on the internet. So, not only are its responses devoid of real human emotion, but the “emotion” it does portray is plagiarized.
Additionally, some artists perceive AI-generated components within digital artworks as cheapening the talent of human artists. AI art shares the same problem as AI-written content: none of what it generates is original. It takes inspiration and snippets from already-published images and mashes them together. The results can be visually striking (or nightmarish), but some argue that it belittles the human spirit.
AI hallucinations also pose a problem for the authenticity and accuracy of AI-generated content. An AI hallucination occurs when a generative AI program doesn’t know the answer to a prompt. Instead of diligently researching the correct answer, the AI makes one up. This can lead to the proliferation of fake news or inaccurate reporting.
AI-generated content is expanding the repertoire and speed of online scammers. For example, phishing emails used to be easy to pick out of a crowd, because of their trademark typos, poor grammar and spelling, and laughably far-fetched stories. Now with ChatGPT, phishing emails are much more polished, since, at the sentence level, it writes smoothly and correctly. This makes it more difficult for even the most diligent readers to identify and avoid phishing attempts. Also, instead of spending time imagining and writing their fake backstories, phishers can offload that task to ChatGPT, which makes quick work of it.
Cybercriminals are working out the possibilities of leveraging ChatGPT to write new types of malware quickly. In four hours, one researcher gave ChatGPT minimal instructions and it wrote an undetectable malware program.2 This means that someone with little to no coding experience could theoretically create a powerful new strain of malicious software. The speed at which the researcher created the malware is also noteworthy. A cybercriminal could trial-and-error dozens of malware programs. The moment authorities detect and shut down one strain, the criminal could release the next soon after.
Deepfake technology and voice AI are expanding the nefarious repertoire of scammers. In one incident, a scammer cloned the voice of a teenager, which was so realistic it convinced the teenager’s own mother that her child was in danger. In actuality, the teen was completely safe.3 Voice AI applications like this one could add false legitimacy to the grandparent scam that has been around for a few years and other voice-based scams. According to McAfee’s Beware the Artificial Imposter report, 77% of people who were targeted by a voice cloning scam lost money as a result.
So, what do you think? Is your day-to-day life easier or more complicated thanks to AI? Should people aim to add it to their routines or stop relying on it so much?
The debate of AI’s place in the mainstream could go on and on. What’s undebatable is the need for protection against online threats that are becoming more powerful when augmented by AI. Here are a few general tips to avoid AI scams:
To cover all your bases, consider investing in McAfee+. McAfee+ is the all-in-one device, online privacy, and identity protection service. Live more confidently online with $1 million in identity remediation support, antivirus for unlimited devices, web protection, and more!
1IBM, “Climate change: IBM boosts materials discovery to improve carbon capture, separation and storage”
2Dark Reading, “Researcher Tricks ChatGPT Into Building Undetectable Steganography Malware”
3Business Insider, “A mother reportedly got a scam call saying her daughter had been kidnapped and she’d have to pay a ransom. The ‘kidnapper’ cloned the daughter’s voice using AI.”
The post Pros and Cons of AI in Daily Life appeared first on McAfee Blog.
Anyone can try ChatGPT for free. Yet that hasn’t stopped scammers from trying to cash in on it.
A rash of sketchy apps has cropped up in Apple’s App Store and Google Play. They pose as ChatGPT apps and try to fleece smartphone owners with phony subscriptions.
Yet you can spot them quickly when you know what to look for.
ChatGPT is an AI-driven chatbot service created by OpenAI. It lets you have uncannily human conversations with an AI that’s been programmed and fed with information over several generations of development. Provide it with an instruction or ask it a question, and the AI provides a detailed response.
Unsurprisingly, it has millions of people clamoring to use it. All it takes is a single prompt, and the prompts range far and wide.
People ask ChatGPT to help them write cover letters for job interviews, make travel recommendations, and explain complex scientific topics in plain language. One person highlighted how they used ChatGPT to run a tabletop game of Dungeons & Dragons for them. (If you’ve ever played, you know that’s a complex task that calls for a fair share of cleverness to keep the game entertaining.)
That’s just a handful of examples. As for myself, I’ve been using ChatGPT in the kitchen. My family and I have been digging into all kinds of new recipes thanks to its AI.
So, where do the scammers come in?
Scammers have recently started posting copycat apps that look like they’re powered by ChatGPT but aren’t. What’s more, they charge people a fee to use them—a prime example of fleeceware. OpenAI, the maker of ChatGPT, has just officially launched its iOS app for U.S. iPhone users, available in the Apple App Store. The official Android version has yet to be released.
Fleeceware mimics a pre-existing service that’s free or low-cost and then charges an excessive fee to use it. Basically, it’s a copycat. An expensive one at that.
Fleeceware scammers often lure in their victims with a “free trial” that quickly converts into a subscription. However, with fleeceware, the terms of the subscription are steep. They might bill the user weekly, and at rates much higher than the going rate.
The result is that the fleeceware app might cost the victim a few bucks before they can cancel it. Worse yet, the victim might forget about the app entirely and run up hundreds of dollars before they realize what’s happening. Again, all for a simple app that’s free or practically free elsewhere.
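The math behind those billing cycles is worth spelling out. The figures below are purely illustrative (they don’t reflect any real app’s pricing), but they show how a small weekly fee snowballs over a year:

```python
# Illustrative fleeceware pricing only; real apps' fees vary.
weekly_fee = 7.99            # billed every week after a short "free" trial
billing_cycles_per_year = 52

annual_cost = weekly_fee * billing_cycles_per_year
print(f"${weekly_fee}/week works out to ${annual_cost:.2f} per year")
# prints: $7.99/week works out to $415.48 per year
```

That’s hundreds of dollars a year for functionality the genuine service gives away for free, which is why a forgotten subscription hurts so much.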
What makes fleeceware so tricky to spot is that it can look legit at first glance. Plenty of smartphone apps offer subscriptions and other in-app purchases. In effect, fleeceware hides in plain sight among the thousands of other legitimate apps in the hopes you’ll download it.
With that, any third-party app that charges a fee to use ChatGPT is fleeceware. ChatGPT offers basic functionality that anyone can use for free.
There is one case where you might pay a fee to use ChatGPT. It has its own subscription-level offering, ChatGPT Plus. With a subscription, ChatGPT responds more quickly to prompts and offers access during peak hours when free users might be shut out. That’s the one legitimate case where you might pay to use it.
In all, more and more people want to take ChatGPT for a spin. However, they might not realize it’s free. Scammers bank on that, and so we’ve seen a glut of phony ChatGPT apps that aim to install fleeceware onto people’s phones.
Read the description of the app and see what the developer is really offering. If the app charges you to use ChatGPT, it’s fleeceware. Anyone can use ChatGPT for free by setting up an account at its official website, https://chat.openai.com.
Reviews can tell you quite a bit about an app. They can also tell you how the company that created it handles customer feedback.
In the case of fleeceware, you’ll likely see reviews that complain about sketchy payment terms. They might mention three-day trials that automatically convert to pricey monthly or weekly subscriptions. Moreover, they might describe how payment terms have changed and become more costly as a result.
In the case of legitimate apps, billing issues can arise from time to time, so see how the company handles complaints. Companies in good standing will typically provide links to customer service where people can resolve any issues they have. Company responses that are vague, or a lack of responses at all, should raise a red flag.
Scammers are smart. They count on you glancing at an overall rating of 4/5 stars or more and thinking that’s good enough. So they’ll pack their app landing page with dozens and dozens of phony, fawning reviews to make the app look legitimate. This tactic serves another purpose: it buries the true reviews written by actual users, which might be negative because the app is a scam.
Filter the app’s reviews for the one-star reviews and see what concerns people have. Do they mention overly aggressive billing practices, like the wickedly high prices and weekly billing cycles mentioned above? That might be a sign of fleeceware. Again, see if the app developer responded to the concerns and note the quality of the response. A legitimate company will honestly want to help a frustrated user and provide clear next steps to resolve the issue.
Google Play does its part to keep its virtual shelves free of malware-laden apps with a thorough submission process, as reported by Google. It further keeps things safer through its App Defense Alliance, which shares intelligence across a network of partners, of which we’re a proud member. Users also have the option of running Play Protect to check apps for safety before they’re downloaded. Apple’s App Store has its own rigorous review process for submitted apps. Likewise, Apple deletes hundreds of thousands of malicious apps from its store each year.
Third-party app stores might not have protections like these in place. Moreover, some of them might be fronts for illegal activity. Organized cybercrime organizations deliberately populate their third-party stores with apps that steal funds or personal information. Stick with the official app stores for the most complete protection possible.
Many fleeceware apps deliberately make it tough to cancel them. You’ll often see complaints about that in reviews: “I don’t see where I can cancel my subscription!” Deleting the app from your phone isn’t enough. Your subscription remains active until you cancel it through your app store account.
Luckily, your phone makes it easy to cancel subscriptions right from your settings menu. Canceling makes sure your credit or debit card won’t get charged when the next billing cycle comes up.
Be wary. Many fleeceware apps have aggressive billing cycles. Sometimes weekly.
ChatGPT is free. Anyone can use it by setting up a free account with OpenAI at https://chat.openai.com. Smartphone apps that charge you to use it are a scam.
You can download the official app, currently available for iOS, from the App Store.
The post Anyone Can Try ChatGPT for Free—Don’t Fall for Sketchy Apps That Charge You appeared first on McAfee Blog.
Three seconds of audio is all it takes.
Cybercriminals have taken up newly forged artificial intelligence (AI) voice cloning tools and created a new breed of scam. With a small sample of audio, they can clone the voice of nearly anyone and send bogus messages by voicemail or voice messaging texts.
The aim, most often, is to trick people out of hundreds, if not thousands, of dollars.
Our recent global study found that out of 7,000 people surveyed, one in four said that they had experienced an AI voice cloning scam or knew someone who had. Further, our research team at McAfee Labs discovered just how easily cybercriminals can pull off these scams.
With a small sample of a person’s voice and a script cooked up by a cybercriminal, these voice clone messages sound convincing. In our worldwide survey, 70% of people said they weren’t confident they could tell the difference between a cloned voice and the real thing.
Cybercriminals create the kind of messages you might expect. Ones full of urgency and distress. They will use the cloning tool to impersonate a victim’s friend or family member with a voice message that says they’ve been in a car accident, or maybe that they’ve been robbed or injured. Either way, the bogus message often says they need money right away.
In all, the approach has proven quite effective so far. One in ten of people surveyed in our study said they received a message from an AI voice clone, and 77% of those victims said they lost money as a result.
Of the people who reported losing money, 36% said they lost between $500 and $3,000, while 7% got taken for sums anywhere between $5,000 and $15,000.
Of course, a clone needs an original. Cybercriminals have no difficulty sourcing original voice files to create their clones. Our study found that 53% of adults said they share their voice data online or in recorded notes at least once a week, and 49% do so up to ten times a week. All this activity generates voice recordings that could be subject to hacking, theft, or sharing (whether accidental or maliciously intentional).
Consider that people post videos of themselves on YouTube, share reels on social media, and perhaps even participate in podcasts. Even by accessing relatively public sources, cybercriminals can stockpile their arsenals with powerful source material.
Nearly half (45%) of our survey respondents said they would reply to a voicemail or voice message purporting to be from a friend or loved one in need of money, particularly if they thought the request had come from their partner or spouse (40%), mother (24%), or child (20%).
Further, they reported they’d likely respond to one of these messages if the message sender said:
These messages are the latest examples of targeted “spear phishing” attacks, which target specific people with specific information that seems just credible enough to act on it. Cybercriminals will often source this information from public social media profiles and other places online where people post about themselves, their families, their travels, and so on—and then attempt to cash in.
Payment methods vary, yet cybercriminals often ask for forms that are difficult to trace or recover, such as gift cards, wire transfers, reloadable debit cards, and even cryptocurrency. As always, requests for these kinds of payments raise a major red flag. It could very well be a scam.
In conjunction with this survey, researchers at McAfee Labs spent two weeks investigating the accessibility, ease of use, and efficacy of AI voice cloning tools. Readily, they found more than a dozen freely available on the internet.
These tools required only a basic level of experience and expertise to use. In one instance, just three seconds of audio was enough to produce a clone with an 85% voice match to the original (based on the benchmarking and assessment of McAfee security researchers). Further effort can increase the accuracy yet more. By training the data models, McAfee researchers achieved a 95% voice match based on just a small number of audio files.
McAfee’s researchers also discovered that they could easily replicate accents from around the world, whether from the US, UK, India, or Australia. However, more distinctive voices were more challenging to copy, such as people who speak with an unusual pace, rhythm, or style. (Think of actor Christopher Walken.) Such voices require more effort to clone accurately, and people who have them are less likely to get cloned, at least where the AI technology currently stands, and putting comedic impersonations aside.
The research team stated that this is yet one more way that AI has lowered the barrier to entry for cybercriminals. Whether that’s using it to create malware, write deceptive messages in romance scams, or now with spear phishing attacks with voice cloning technology, it has never been easier to commit sophisticated looking, and sounding, cybercrime.
Likewise, the study also found that the rise of deepfakes and other disinformation created with AI tools has made people more skeptical of what they see online. Now, 32% of adults said their trust in social media is less than it’s ever been before.
A lot can come from a three-second audio clip.
With the advent of AI-driven voice cloning tools, cybercriminals have created a new form of scam. With arguably stunning accuracy, these tools can let cybercriminals clone the voice of nearly anyone. All they need is a short audio clip to kick off the cloning process.
Yet like all scams, you have ways you can protect yourself. A sharp sense of what seems right and wrong, along with a few straightforward security steps, can help keep you and your loved ones from falling for these AI voice clone scams.
For a closer look at the survey data, along with a nation-by-nation breakdown, download a copy of our report here.
The survey was conducted between January 27th and February 1st, 2023 by Market Research Company MSI-ACI, with people aged 18 years and older invited to complete an online questionnaire. In total 7,000 people completed the survey from nine countries, including the United States, United Kingdom, France, Germany, Australia, India, Japan, Brazil, and Mexico.
The post Artificial Imposters—Cybercriminals Turn to AI Voice Cloning for a New Breed of Scam appeared first on McAfee Blog.
Ping, it’s a scammer!
The sound of an incoming email, text, or direct message has a way of getting your attention, so you take a look and see what’s up. It happens umpteen times a week, to the extent that it feels like part of the flow of your day. And scammers want to tap into that with sneaky phishing attacks that catch you off guard, all with the aim of stealing your personal information or bilking you out of your money.
Phishing attacks take several forms, where scammers masquerade as a legitimate company, financial institution, government agency, or even as someone you know. And they’ll come after you with messages that follow suit:
You can see why phishing attacks can be so effective. Messages like these have an urgency to them, and they seem like they’re legit, or they at least seem like they might deal with something you might care about. But of course they’re just a ruse. And some of them can look and sound rather convincing. Or at least convincing enough that you’ll not only give them a look, but that you’ll also give them a click too.
And that’s where the troubles start. Clicking the links or attachments sent in a phishing attack can lead to several potentially nasty things, such as:
However, plenty of phishing attacks are preventable. A mix of knowing what to look for and putting a few security steps in place can help you keep scammers at bay.
What a phishing attack looks like has a lot to do with how it reaches you.
There’s a good chance you’ve already seen your share of phishing attempts on your phone. A text comes through with a brief message that one of your accounts needs attention, from an entirely unknown number. Along with it is a link that you can tap to follow up, which will send you to a malicious site. In some cases, the sender may skip the link and attempt to start a conversation with the aim of getting you to share your personal information or possibly fork over some payment with a gift card, money order, rechargeable debit card, or other form of payment that is difficult to trace and recover.
In the case of social media, you can expect that the attack will come from an imposter account that’s doing its best to pose as one of those legitimate businesses or organizations we talked about, or perhaps as a stranger or even someone you know. The name and profile pic will do their best to play the part. If you click on the account that sent it, you may see that it was created only recently and that it has few to no followers, both of which are red flags. The attack is typically conversational, much like described above, where the scammer attempts to pump you for personal info or money.
Attacks that come by direct messaging apps will work much in the same way. The scammer will set up a phony account, and where the app allows, a phony name and a phony profile pic to go along with it.
Email gets a little more complicated because emails can range anywhere from a few simple lines of text to a fully designed piece complete with images, formatting, and embedded links—much like a miniature web page.
In the past, email phishing attacks looked rather unsophisticated, rife with poor spelling and grammar, along with sloppy-looking layouts and images. That’s still sometimes the case today. Yet not always. Some phishing emails look like the real thing. Or nearly so.
Case in point, here’s a look at a phishing email masquerading as a McAfee email:
There’s a lot going on here. The scammers try to mimic the McAfee brand, yet don’t quite pull it off. Still, they do several things to try to be convincing.
Note the use of photography and the box shot of our software, paired with a prominent “act now” headline. It’s not the style of photography we use. Not that people would generally know this. However, some might have a passing thought like, “Huh. That doesn’t really look right for some reason.”
Beyond that, there are a few capitalization errors, some misplaced punctuation, plus the “order now” and “60% off” icons look rather slapped on. Also note the little dash of fear it throws in at the top of the email with mention of “There are (42) viruses on your computer.”
Taken all together, you can spot many email scams by taking a closer look, seeing what doesn’t feel right, and then trusting your gut. But that asks you to slow down, take a moment, and eyeball the email critically. Which people don’t always do. And that’s what scammers count on.
Similar ploys see scammers pose as legitimate companies and retailers, where they ask you to log into a bogus account page to check a statement or the status of an order. Some scammers offer links to “discount codes” that are instead links to landing pages designed to steal your account login information as well. Similarly, they may simply send a malicious email attachment with the hope that you’ll click it.
In other forms of email phishing attacks, scammers may pose as a co-worker, business associate, vendor, or partner to get the victim to click a malicious link or download malicious software. These may include a link to a bogus invoice, spreadsheet, notetaking file, or word processing doc—just about anything that looks like it could be a piece of business correspondence. Instead, the link leads to a scam website that asks the victim to “log in and download” the document, which steals account info as a result. Scammers may also include attachments to phishing emails that can install malware directly on the device, sometimes by infecting an otherwise everyday document with a malicious payload.
Email scammers may also pose as someone you know, whether by propping up an imposter email account or by outright hijacking an existing account. The attack follows the same playbook, using a link or an attachment to steal personal info, request funds, or install malware.
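For readers comfortable poking under the hood, a quick header check can surface one common spoofing tell. The sketch below is a minimal illustration, not a real spam filter; the message, addresses, and domains are all made up. It uses Python’s standard email module to compare the From and Reply-To domains of a raw message:

```python
import email
import email.utils
from email import policy

# A hypothetical raw message (illustrative addresses, not real senders):
raw = b"""From: "Your Bank" <support@yourbank-example.com>
Reply-To: collector@totally-different-example.net
Subject: Urgent: verify your account

Click the link below right away.
"""

msg = email.message_from_bytes(raw, policy=policy.default)
from_addr = email.utils.parseaddr(msg["From"])[1]
reply_addr = email.utils.parseaddr(msg["Reply-To"])[1]

# Compare the domain the message claims to come from with the one
# that would actually receive your reply.
from_domain = from_addr.rsplit("@", 1)[-1]
reply_domain = reply_addr.rsplit("@", 1)[-1]

if reply_domain and reply_domain != from_domain:
    print("Warning: replies go to a different domain than the sender claims")
```

A mismatched Reply-To isn’t proof of a scam on its own, but combined with urgency and an unexpected attachment, it’s a strong hint to slow down.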
While you can’t outright stop phishing attacks from making their way to your computer or phone, you can do several things to keep yourself from falling to them. Further, you can do other things that may make it more difficult for scammers to reach you.
The content and the tone of the message can tell you quite a lot. Threatening messages or ones that play on fear are often phishing attacks, such as angry messages from a so-called tax agent looking to collect back taxes. Other messages lean heavily on urgency, like the phony McAfee phishing email above that says your license has expired today and that you have “(42)” viruses. And during the holidays, watch out for loud, overexcited messages about deep discounts on hard-to-find items. Instead of linking you to a proper ecommerce site, they may link you to a scam shopping site that does nothing but steal your money and the account information you used to pay them. In all, phishing attacks indeed smell fishy. Slow down and review the message with a critical eye. It may tip you off to a scam.
Some phishing attacks can look rather convincing. So much so that you’ll want to follow up on them, like if your bank reports irregular activity on your account or a bill appears to be past due. In these cases, don’t click on the link in the message. Go straight to the website of the business or organization in question and access your account from there. Likewise, if you have questions, you can always reach out to their customer service number or web page.
When scammers contact you via social media, that in and of itself can be a tell-tale sign of a scam. Consider, would an income tax collector contact you over social media? The answer there is no. For example, in the U.S. the Internal Revenue Service (IRS) makes it quite clear that it will never contact taxpayers via social media. (Let alone send angry, threatening messages.) In all, legitimate businesses and organizations don’t use social media as a channel for official communications. They have accepted ways they will, and will not, contact you. If you have any doubts about a communication you received, contact the business or organization in question directly and follow up with one of their customer service representatives.
Some phishing attacks involve attachments packed with malware like the ransomware, viruses, and keyloggers we mentioned earlier. If you receive a message with such an attachment, delete it. Even if you receive an email with an attachment from someone you know, follow up with that person. Particularly if you weren’t expecting an attachment from them. Scammers will often hijack or spoof email accounts of everyday people to spread malware.
On computers and laptops, you can hover your cursor over links without clicking on them to see the web address. Take a close look at the addresses the message is using. If it’s an email, look at the email address. Maybe the address doesn’t match the company or organization at all. Or maybe it almost does, yet adds a few letters or words to the name. This marks yet another sign that you may have a phishing attack on your hands. Scammers also use the common tactic of a link shortener, which creates links that almost look like strings of indecipherable text. These shortened links mask the true address, which may indeed be a link to a scam site. Delete the message. If possible, report it. Many social media platforms and messaging apps have built-in controls for reporting suspicious accounts and messages.
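For the technically inclined, that “check the address” habit can be sketched in code. The snippet below is a minimal illustration, not a real phishing filter; the lookalike domain is invented for the example. It uses Python’s urllib.parse to check whether a link’s host actually belongs to the domain it claims to represent:

```python
from urllib.parse import urlparse

def extract_host(url: str) -> str:
    """Return the lowercase hostname of a URL, or '' if it can't be parsed."""
    return (urlparse(url).hostname or "").lower()

def looks_suspicious(url: str, expected_domain: str) -> bool:
    """Flag a link whose host is neither the expected domain nor a subdomain of it."""
    host = extract_host(url)
    return not (host == expected_domain or host.endswith("." + expected_domain))

# A lookalike host buries the trusted name in front of the real domain:
print(looks_suspicious("https://paypal.com.account-check.net/login", "paypal.com"))  # True
print(looks_suspicious("https://www.paypal.com/signin", "paypal.com"))  # False
```

The first link fails because the browser only cares about the rightmost part of the hostname (account-check.net), no matter how trustworthy the prefix looks, which is exactly the trick scammers play on a quick glance.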
On social media and messaging platforms, stick to following, friending, and messaging people who you really know. As for those people who contact you out of the blue, be suspicious. Sad to say, they’re often scammers canvassing these platforms for victims. Better yet, where you can, set your profile to private, which makes it more difficult for scammers to select and stalk you for an attack.
How’d that scammer get your phone number or email address anyway? Chances are, they pulled that information off a data broker site. Data brokers buy, collect, and sell detailed personal information, which they compile from several public and private sources, such as local, state, and federal records, plus third parties like supermarket shopper’s cards and mobile apps that share and sell user data. Moreover, they’ll sell it to anyone who pays for it, including people who’ll use that information for scams. You can help reduce those scam texts and calls by removing your information from those sites. Our Personal Data Cleanup scans some of the riskiest data broker sites and shows you which ones are selling your personal info.
Online protection software can protect you in several ways. First, it can offer safe browsing features that can identify malicious links and downloads, which can help prevent clicking them. Further, it can steer you away from dangerous websites and block malware and phishing sites if you accidentally click on a malicious link. And overall, strong virus and malware protection can further block any attacks on your devices. Be sure to protect your smartphones in addition to your computers and laptops as well, particularly given all the sensitive things we do on them, like banking, shopping, and booking rides and travel.
Once phishing attacks were largely the domain of bogus emails, yet now they’ve spread to texts, social media, and messaging apps—anywhere a scammer can send a fraudulent message while posing as a reputable source.
Scammers count on you taking the bait, the immediate feelings of fear or concern that there’s a problem with your taxes or one of your accounts. They also prey on scarcity, like during the holidays when people search for great deals on gifts and have plenty of packages on the move. With a critical eye, you can often spot those scams. Sometimes, a pause and a little thought is all it takes. And in the cases where a particularly cagey attack makes its way through, online protection software can warn you that the link you’re about to click is indeed a trap.
Taken all together, you have plenty of ways you can beat scammers at their game.
The post How to Avoid Phishing Attacks on Your Smartphones and Computers appeared first on McAfee Blog.
There are plenty of phish in the sea.
Millions of bogus phishing emails land in millions of inboxes each day with one purpose in mind—to rip off the recipient. Whether they’re out to crack your bank account, steal personal information, or both, you can learn how to spot phishing emails and keep yourself safe.
And some of today’s phishing emails are indeed getting tougher to spot.
They seem to come from companies you know and trust, like your bank, your credit card company, or services like Netflix, PayPal, and Amazon. And some of them look convincing. The writing and the layout are crisp, and the overall presentation looks professional. Yet there's still something off about them.
And there’s certainly something wrong with that email. It was written by a scammer. Phishing emails employ a bait-and-hook tactic, where an urgent or enticing message is the bait and malware or a link to a phony login page is the hook.
Once the hook gets set, several things might happen. That phony login page may steal account and personal information. Or that malware might install keylogging software that steals information, viruses that open a back door through which data can get hijacked, or ransomware that holds a device and its data hostage until a fee is paid.
Again, you can sidestep these attacks if you know how to spot them. There are signs.
Let’s look at how prolific these attacks are, pick apart a few examples, and then break down the things you should look for.
<h2>Phishing attack statistics—the millions of attempts made each year</h2>
In the U.S. alone, more than 300,000 victims reported a phishing attack to the FBI in 2022. Phishing attacks topped the list of reported complaints, roughly six times greater than the second top offender, personal data breaches. The actual figure is undoubtedly higher, given that not all attacks get reported.
Looking at phishing attacks worldwide, one study suggests that more than 255 million phishing attempts were made in the second half of 2022 alone. That marks a 61% increase over the previous year. Another study concluded that 1 in every 99 emails sent contained a phishing attack.
Yet scammers won’t always cast such a wide net. Statistics point to a rise in targeted spear phishing, where the attacker goes after a specific person. They will often target people at businesses who have the authority to transfer funds or make payments. Other targets include people who have access to sensitive information like passwords, proprietary data, and account information.
As such, these attacks can get costly. In 2022, the FBI received 21,832 complaints from businesses that said they fell victim to a spear phishing attack. The adjusted losses were over $2.7 billion—an average cost of $123,671 per attack.
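That per-attack average follows directly from the two FBI figures. As a quick back-of-the-envelope check (using the rounded $2.7 billion total, so the result is approximate):

```python
# FBI IC3 figures for 2022 spear phishing complaints, as cited above.
complaints = 21_832
adjusted_losses = 2_700_000_000  # "over $2.7 billion" (rounded)

# Integer division of the rounded total lands on the stated average.
average = adjusted_losses // complaints
print(f"${average:,} per attack")  # $123,671 per attack
```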
So while exact phishing attack statistics remain somewhat elusive, there's no question that phishing attacks are prolific. And costly.
<h2>What does a phishing attack look like?</h2>
Nearly every phishing attack sends an urgent message. One designed to get you to act.
Some examples …
When set within a nice design and paired with some official-looking logos, it's easy to see why plenty of people click the link or attachment that comes with messages like these.
And that's the tricky thing with phishing attacks. Scammers have leveled up their game in recent years. Their phishing emails can look convincing. Not long ago, you could point to misspellings, lousy grammar, poor design, and logos that looked stretched or used the wrong colors. Poorly executed phishing attacks like that still make their way into the world. Yet it's increasingly common to see far more sophisticated attacks today, ones that appear like a genuine message or notice.
Case in point:
Say you got an email that said your PayPal account had an issue. Would you type your account information here if you found yourself on this page? If so, you would have handed over your information to a scammer.
We took the screenshot above as part of following a phishing attack to its end—without entering any legitimate info, of course. In fact, we entered a garbage email address and password, and it still let us in. That’s because the scammers were after other information, as you’ll soon see.
As we dug into the site more deeply, it looked pretty spot on. The design mirrored PayPal’s style, and the footer links appeared official enough. Yet then we looked more closely.
Note the subtle errors, like “card informations” and “Configuration of my activity.” While companies make grammatical errors on occasion, spotting them in an interface should raise a big red flag. Plus, the site asks for credit card information very early in the process. All suspicious.
Here’s where the attackers really got bold.
They ask for bank “informations,” which includes not only routing and account numbers but the account password too. As said, bold. And entirely bogus.
Taken all together, the subtle errors and the bald-faced grab for detailed account information clearly mark this as a scam.
Let’s take a few steps back, though. Who sent the phishing email that directed us to this malicious site? None other than “paypal at inc dot-com.”
Clearly, that’s a phony email. And typical of a phishing attack where an attacker shoehorns a familiar name into an unassociated email address, in this case “inc dot-com.” Attackers may also gin up phony addresses that mimic official addresses, like “paypalcustsv dot-com.” Anything to trick you.
Likewise, the malicious site that the phishing email sent us to used a spoofed address as well. It had no official association with PayPal at all—which is proof positive of a phishing attack.
Note that companies only send emails from their official domain names, just as their sites only use their official domain names. Several companies and organizations will list those official domains on their websites to help curb phishing attacks.
For example, PayPal has a page that clearly states how it will and will not contact you. At McAfee, we have an entire page dedicated to preventing phishing attacks, which also lists the official email addresses we use.
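As a rough illustration of that kind of domain check, here's a minimal Python sketch that compares a sender's address against a published allowlist. The `OFFICIAL_DOMAINS` set and the example addresses are hypothetical, not a real vetting tool:

```python
from email.utils import parseaddr

# Hypothetical examples of domains a company might publish as official.
OFFICIAL_DOMAINS = {"paypal.com", "mcafee.com"}

def sender_domain(from_header: str) -> str:
    """Extract the domain portion from an email's From: header."""
    _, address = parseaddr(from_header)
    return address.rpartition("@")[2].lower()

def looks_official(from_header: str) -> bool:
    # Exact match only: lookalikes such as "paypal.inc.com" must NOT pass.
    return sender_domain(from_header) in OFFICIAL_DOMAINS

print(looks_official("PayPal <service@paypal.com>"))      # True
print(looks_official("PayPal <service@paypal.inc.com>"))  # False
```

A real mail client relies on cryptographic checks like SPF and DKIM rather than string matching, but the principle is the same: the familiar display name means nothing; only the actual domain counts.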
Not every scammer is so sophisticated, at least in the way that they design their phishing emails. We can point to a few phishing emails that posed as legitimate communication from McAfee as examples.
There’s a lot going on in this first email example. The scammers try to mimic the McAfee brand, yet don’t pull it off. Still, they do several things to try to act convincing.
Note the use of photography and the box shot of our software, paired with a prominent “act now” headline. It’s not the style of photography we use. Not that people would generally know this. However, some might have a passing thought like, “Huh. That doesn’t really look like what McAfee usually sends me.”
Beyond that, there are a few capitalization errors, some misplaced punctuation, and the “order now” and “60% off” icons look rather slapped on. Also note the little dash of fear it throws in with a mention of “There are (42) viruses on your computer …”
Taken all together, someone can readily spot that this is a scam with a closer look.
This next ad falls into the less sophisticated category. It's practically all text and goes heavy on the red ink. Once again, it contains plenty of capitalization errors, with a few gaffes in grammar as well. In all, it doesn't read smoothly. Nor is it easy on the eye, as a proper email about your account should be.
What sets this example apart is the “advertisement” disclaimer below, which tries to lend the attack some legitimacy. Also note the phony “unsubscribe” link, plus the (scratched out) mailing address and phone, which all try to do the same.
This last example doesn’t get our font right, and the trademark symbol is awkwardly placed. The usual grammar and capitalization errors crop up again, yet this piece of phishing takes a slightly different approach.
The scammers placed a little timer at the bottom of the email. That adds a degree of scarcity. They want you to think that you have about half an hour before you are unable to register for protection. That’s bogus, of course.
Seeing any recurring themes? There are a few for sure. With these examples in mind, get into the details—how you can spot phishing attacks and how you can avoid them altogether.
Just as we saw, some phishing attacks indeed appear fishy from the start. Yet sometimes it takes a bit of time and a particularly critical eye to spot them.
And that’s what scammers count on. They hope that you’re moving quickly or otherwise a little preoccupied when you’re going through your email or messages. Distracted enough so that you might not pause to think, is this message really legit?
One of the best ways to beat scammers is to take a moment to scrutinize that message while keeping the following in mind …
Fear. That’s a big one. Maybe it’s an angry-sounding email from a government agency saying that you owe back taxes. Or maybe it’s another from a family member asking for money because there’s an emergency. Either way, scammers will lean heavily on fear as a motivator.
If you receive such a message, think twice. Consider whether it's genuine. For instance, consider that tax email example. In the U.S., the Internal Revenue Service (IRS) has specific guidelines as to how and when it will contact you. As a rule, it will contact you via physical mail delivered by the U.S. Postal Service. (It won't call or apply pressure tactics—only scammers do that.) Tax agencies in other nations have similar standards.
Scammers also love urgency. Phishing attacks begin by stirring up your emotions and getting you to act quickly. Scammers might use threats or overly excitable language to create that sense of urgency, both of which are clear signs of a potential scam.
Granted, legitimate businesses and organizations might reach out to notify you of a late payment or possible illicit activity on one of your accounts. Yet they’ll take a far more professional and even-handed tone than a scammer would. For example, it’s highly unlikely that your local electric utility will angrily shut off your service if you don’t pay your past due bill immediately.
Gift cards, cryptocurrency, money orders—these forms of payment are another sign that you might be looking at a phishing attack. Scammers prefer these methods of payment because they’re difficult to trace. Additionally, consumers have little or no way to recover lost funds from these payment methods.
Legitimate businesses and organizations won’t ask for payments in those forms. If you get a message asking for payment in one of those forms, you can bet it’s a scam.
Here’s another way you can spot a phishing attack. Take a close look at the addresses the message is using. If it’s an email, look at the email address. Maybe the address doesn’t match the company or organization at all. Or maybe it does somewhat, yet it adds a few letters or words to the name. This marks yet another sign that you might have a phishing attack on your hands.
Likewise, if the message contains a web link, closely examine that as well. If the name looks at all unfamiliar or altered from the way you’ve seen it before, that might also mean you’re looking at a phishing attempt.
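The same scrutiny applies in code. This hedged Python sketch pulls the hostname out of a URL and flags a familiar brand name sitting inside an unrelated domain; the `TRUSTED_HOSTS` set and the example URLs are made-up illustrations, and real phishing detection uses many more signals than this one heuristic:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hostnames you already know and trust.
TRUSTED_HOSTS = {"paypal.com", "www.paypal.com"}

def link_is_suspicious(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    if host in TRUSTED_HOSTS:
        return False
    # A familiar brand name buried in an unrelated hostname is a classic red flag.
    return "paypal" in host

print(link_is_suspicious("https://www.paypal.com/signin"))        # False
print(link_is_suspicious("https://paypal.account-verify.top/x"))  # True
```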
Online protection software can protect you from phishing attacks in several ways.
For starters, it offers web protection that warns you when links lead to malicious websites, such as the ones used in phishing attacks. In the same way, online protection software can warn you about malicious downloads and email attachments so that you don’t end up with malware on your device. And, if the unfortunate does happen, antivirus can block and remove malware.
Online protection software like ours can also address the root of the problem. Scammers must get your email address from somewhere. Often, they get it from online data brokers, sites that gather and sell personal information to any buyer—scammers included.
Data brokers source this information from public records and third parties alike, then sell it in bulk, providing scammers with massive mailing lists that can target thousands of potential victims. You can remove your personal info from some of the riskiest data broker sites with our Personal Data Cleanup, which can lower your exposure to scammers by keeping your email address out of their hands.
In all, phishing emails have telltale signs, some more difficult to see than others. Yet you can spot them when you know what to look for and take the time to look for them. With these attacks so prevalent and on the rise, looking at your email with a critical eye is a must today.
The post How to Spot Phishing Emails and Scams appeared first on McAfee Blog.
Artificial intelligence (AI) is making its way from high-tech labs and Hollywood plots into the hands of the general population. ChatGPT, the text generation tool, hardly needs an introduction, and AI art generators (like Midjourney and DALL-E) are hot on its heels in popularity. Inputting nonsensical prompts and receiving ridiculous images in return is a fun way to spend an afternoon.
However, while you’re using AI art generators for a laugh, cybercriminals are using the technology to trick people into believing sensationalist fake news, catfish dating profiles, and damaging impersonations. Sophisticated AI-generated art can be difficult to spot, but here are a few signs that you may be viewing a dubious image or engaging with a criminal behind an AI-generated profile.
To better understand the cyberthreats posed by each, here are some quick definitions:
AI art and deepfake tools aren't confined to the dark web. Anyone can download an AI art or deepfake app, such as FaceStealer and Fleeceware. Because the technology isn't illegal and has many innocent uses, it's difficult to regulate.
<h2>How Do People Use AI Art Maliciously?</h2>
It’s perfectly innocent to use AI art to create a cover photo for your social media profile or to pair it with a blog post. However, it’s best to be transparent with your audience and include a disclaimer or caption saying that it’s not original artwork. AI art turns malicious when people use images to intentionally trick others and gain financially from the trickery.
Catfish may use deepfake profile pictures and videos to convince their targets that they’re genuinely looking for love. Revealing their real face and identity could put a criminal catfish at risk of discovery, so they either use someone else’s pictures or deepfake an entire library of pictures.
Fake news propagators may also enlist the help of AI art or a deepfake to add “credibility” to their conspiracy theories. When they pair their sensationalist headlines with a photo that, at a quick glance, proves its legitimacy, people may be more likely to share and spread the story. Fake news is damaging to society because of the extreme negative emotions it can generate in huge crowds. The resulting hysteria or outrage can lead to violence in some cases.
Finally, some criminals may use deepfake to trick face ID and gain entry to sensitive online accounts. To prevent someone from deepfaking their way into your accounts, protect your accounts with multifactor authentication. That means that more than one method of identification is necessary to open the account. These methods can be one-time codes sent to your cellphone, passwords, answers to security questions, or fingerprint ID in addition to face ID.
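Those one-time codes are typically generated with the time-based one-time password (TOTP) algorithm standardized in RFC 6238. As a sketch of how an authenticator app derives a six-digit code from a shared secret (the Base32 secret below is a demo value for illustration only; real secrets come from your account's MFA setup):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time code (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)              # counter as big-endian 64-bit
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Demo secret only; never hardcode a real MFA secret.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code depends on both the secret and the current 30-second window, a deepfaked face alone isn't enough to get in; the attacker would also need your phone or secret.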
Before you start an online relationship or share an apparent news story on social media, scrutinize images using these three tips to pick out malicious AI-generated art and deepfake.
Fake images usually don’t appear by themselves. There’s often text or a larger article around them. Inspect the text for typos, poor grammar, and overall poor composition. Phishers are notorious for their poor writing skills. AI-generated text is more difficult to detect because its grammar and spelling are often correct; however, the sentences may seem choppy.
Does the image seem too bizarre to be real? Too good to be true? Extend this generation’s rule of thumb of “Don’t believe everything you read on the internet” to include “Don’t believe everything you see on the internet.” If a fake news story is claiming to be real, search for the headline elsewhere. If it’s truly noteworthy, at least one other site will report on the event.
AI technology often generates a finger or two too many on hands, and a deepfake creates eyes that may have a soulless or dead look to them. Also, there may be shadows in places where they wouldn’t be natural, and the skin tone may look uneven. In deepfaked videos, the voice and facial expressions may not exactly line up, making the subject look robotic and stiff.
Fake images are tough to spot, and they'll likely get more realistic as the technology improves. Awareness of emerging AI threats better prepares you to take control of your online life. There are quizzes online that compare deepfakes and AI art with genuine people and artworks created by humans. When you have a spare ten minutes, consider taking a quiz; learning from your mistakes will help you identify malicious fake art in the future.
To give you more confidence in the security of your online life, partner with McAfee. McAfee+ Ultimate is the all-in-one privacy, identity, and device security service. Protect up to six members of your family with the family plan, and receive up to $2 million in identity theft coverage. Partner with McAfee to stop any threats that sneak under your watchful eye.
The post How to Spot Fake Art and Deepfakes appeared first on McAfee Blog.