With the rise of artificial intelligence (AI) and machine learning, concerns about the privacy of personal data have reached an all-time high. Generative AI is a type of AI that can generate new data, such as images, videos, and text, from existing data. This technology can be used for a variety of purposes, from creating realistic images and audio to producing “deepfakes” and manipulating public opinion. As a result, it’s important to be aware of the potential risks that generative AI poses to your privacy.
In this blog post, we’ll discuss how to protect your privacy from generative AI.
Generative AI is a type of AI that uses existing data to generate new data. It’s commonly used for things like image and video generation, text generation, and synthetic speech. This technology can be used for both good and bad purposes, so it’s important to understand how it works and the potential risks it poses to your privacy.
Generative AI can be used to create deepfakes, which are fake images or videos generated from existing data. This technology can be used for malicious purposes, such as manipulating public opinion, stealing identities, and spreading false information.
Generative AI uses existing data to generate new data, so it’s important to be aware of what data you’re sharing online. Only share data that you’re comfortable sharing, and use strong passwords and two-factor authentication whenever possible.
There are a number of privacy-focused tools available that can help protect your data from generative AI. These include tools like privacy-focused browsers, VPNs, and encryption tools. It’s important to understand how these tools work and how they can help protect your data.
It’s important to stay up-to-date on the latest developments in generative AI and privacy. Follow trusted news sources and keep an eye out for changes in the law that could affect your privacy.
By following these tips, you can help protect your privacy from generative AI. It’s important to be aware of the potential risks that this technology poses and to take steps to protect yourself and your data.
Of course, the most important step is to be aware and informed. Research the organizations that are using generative AI and make sure you understand how they use your data. Read the terms and conditions of any contracts you sign and be aware of any third parties that may have access to your data. Additionally, look out for notifications of changes in privacy policies and take the time to understand any changes that could affect you.
Finally, regularly check your accounts and credit reports to make sure that your data is not being used without your consent. You can also take the extra step of using the security and privacy features available on your device. Taking the time to understand which settings are available, as well as what data is being collected and used, can help you protect your privacy and keep your data safe.
This blog post was co-written with artificial intelligence (AI) as a tool to supplement, enhance, and make suggestions. While AI may assist in the creative and editing process, the thoughts, ideas, opinions, and the finished product are entirely human and original to their author. We strive to ensure accuracy and relevance, but please be aware that AI-generated content may not always fully represent the intent or expertise of human-authored material.
The post How to Protect Your Privacy From Generative AI appeared first on McAfee Blog.
Just when they need financial security the most, job seekers face another challenge—getting ripped off by job scams.
Scammers will capitalize on any opportunity to fleece a victim, like the holidays with ecommerce scams and tax time with IRS scams. Now, with surging employment figures, scammers have turned to job scams that harvest money and personal information from job seekers.
In some ways, the tactics bear resemblance to online dating and romance scammers who hide behind a phony profile and tell their victims a story they want to hear, namely that someone loves them. With job scams, they take on the persona of a recruiter and lure their victims with what seems like an outstanding job offer. Of course, there’s no job. It’s a scam.
These attacks have gained a degree of sophistication that they once lacked. Years prior, scammers relied on spammy emails and texts to share their bogus job offers. Now, they’re using phony profiles on social media platforms to target victims.
Social media platforms have several mechanisms in place to identify and delete the phony profiles that scammers use for these attacks. Of note, LinkedIn’s latest community report cited the removal of more than 21 million fake accounts in the first half of 2022.
Likewise, Facebook took action on 1.5 billion fake accounts in Q3 of 2022 alone, with more than 99% of them acted on before users reported them.
Still, some scammers make their way through.
As Steve Grobman, our senior vice president and chief technology officer, was quoted in an article for CNET, the continued shift to remote work, along with remote hiring, has also made it easier for online job scams to flourish. And the figures bear that out.
In 2021, the FTC called out $209 million in reported losses due to job scams. In just the first three quarters of 2022, reported job scam losses had already reached $250 million. While year-end figures have yet to be posted, the final tally for 2022 could end up well over $300 million, a 50% uptick. And the median loss per victim? Right around $2,000 each.
While the promise of work or a job offer makes these scams unique, the scammers behind them want the same old things—your money, along with your personal information so that they can use it to cause yet more harm. The moment any so-called job offer asks for either of those, a red flag should immediately go up.
It’s possibly a scam if the offer asks for any of the following: your Social Security or tax ID number, your bank account information, or an upfront payment.
In the hands of a scammer, your SSN or tax ID is the master key to your identity. With it, they can open bank cards and lines of credit, apply for insurance benefits, collect benefit payments and tax refunds, or even commit crimes, all in your name. Needless to say, scammers will ask for it, perhaps under the guise of a background check or for payroll purposes. The only time you should provide your SSN or tax ID is when you know that you have accepted a legitimate job with a legitimate company, and through a secure document signing service, never via email, text, or over the phone.
Another trick scammers rely on is asking for bank account information so that they can wire payment to you. As with the SSN above, closely guard this information and treat it in exactly the same way. Don’t give it out unless you actually have a legitimate job with a legitimate company.
Some scammers will take a different route. They’ll promise employment, but first you’ll need to pay them for training, onboarding, or equipment before you can start work. Legitimate companies won’t make these kinds of requests.
Aside from the types of information they ask for, the way they ask for your information offers other clues that you might be mixed up in a scam. Look out for the following as well:
You can sniff out many online scams with the “too good to be true” test. Scammers often make big promises during the holidays with low-priced offers for hard-to-get holiday gifts and then simply don’t deliver. It’s the same with job scams. The high pay, the low hours, and even the offer of perks like a free laptop are all signs that a job offer might be a scam. Moreover, when pressed for details about this seemingly fantastic job opportunity, scammers may balk. Or they may come back with incomplete or inconsistent replies because the job doesn’t exist at all.
Job scammers hide behind their screens, using the anonymity of the internet to their advantage. They also create phony profiles on networking and social media websites, which means they won’t agree to a video chat or call, both of which are commonly used in legitimate recruiting today. If your job offer doesn’t involve some sort of face-to-face communication, that’s an indication it may be a scam.
Scammers now have an additional tool to reel in their victims—AI chatbots like ChatGPT, which can generate email correspondence, chats, LinkedIn profiles, and other content in seconds so they can bilk victims on a huge scale. However, AI has its limits. Right now, it tends to use shorter sentences in a way that seems like it’s simply spitting out information. There’s little story or substance to the content it creates. That may be a sign of a scam. Likewise, even without AI, you may spot a recruiter using technical or job-related terms in unusual ways, as if they’re unfamiliar with the work they’re hiring for. That’s another potential sign.
Scammers love a quick conversion. Yet job seekers today know that interview processes are typically long and involved, often relying on several rounds of interviews and loops. If a job offer comes along without the usual rigor and the recruiter is asking for personal information practically right away, that’s another near-certain sign of a scam.
This is another red flag. Legitimate businesses stick to platforms associated with networking for business purposes, typically not networking for families, friends, and interests. Why do scammers use sites like Facebook anyway? They’re a gold mine of information. By trolling public profiles, they have access to years of posts and armloads of personal information on thousands of people, which they can use to target their attacks. This is another good reason to set your social media profiles on platforms like Facebook, Instagram, and other friend-oriented sites to private so that scammers of all kinds, not just job scammers, can’t use your information against you.
As a job hunter, you know that getting the right job requires some research. You look up the company, dig into their history—the work they do, how long they’ve been at it, where their locations are, and maybe even read some reviews provided by current or former employees. A job offer that comes out of the blue calls for taking that research a step further.
After all, is that business really a business, or is it really a scam?
In the U.S., you have several resources that can help you answer that question. The Better Business Bureau (BBB) offers a searchable listing of businesses in the U.S., along with a brief profile, a rating, and even a list of complaints (and company responses) waged against them. Spending some time here can quickly shed light on the legitimacy of a company.
Also in the U.S., you can visit the website of your state’s Secretary of State and search for the business in question, where you can find when it was founded, if it’s still active, or if it exists at all. For businesses based in a state other than your own, you can visit that state’s Secretary of State website for information. A state-by-state list is available on the Secretary of State Corporate Search page.
For a listing of businesses with international locations, organizations like S&P Global Ratings and the Dun and Bradstreet Corporation can provide background information, which may require signing up for an account.
Given the way we rely so heavily on the internet to get things done and simply enjoy our day, comprehensive online protection software that looks out for your identity, privacy, and devices is a must. Specific to job scams, it can help you in several ways.
Job searches are loaded with emotion—excitement and hopefulness, sometimes urgency and frustration as well. Scammers will always lean into these emotions and hope to catch you off your guard. If there’s a common thread across all kinds of online scams, that’s it. Emotion.
A combination of a cool head and some precautionary measures that protect you and your devices can make for a much safer job-hunting experience, and a safer, more private life online too.
Editor’s Note:
Job scams are a crime. If you think that you or someone you know has fallen victim to one, report it to the authorities and appropriate government agencies. In the case of identity theft or loss of personal information, our knowledge base article on identity theft offers suggestions for the steps you can take in specific countries, along with helpful links for local authorities that you can turn to for reporting and assistance.
The post Job Scams—How to Tell if that Online Job Offer is Fake appeared first on McAfee Blog.
It’s all anyone can talk about. In classrooms, boardrooms, on the nightly news, and around the dinner table, artificial intelligence (AI) is dominating conversations. Given how passionately everyone is debating, celebrating, and villainizing AI, you’d think it was a completely new technology; however, AI has been around in various forms for decades. Only now is it accessible to everyday people like you and me.
The most famous of these mainstream AI tools are ChatGPT, DALL-E, and Bard, among others. The specific technology that links these tools is called generative artificial intelligence. Sometimes shortened to gen AI, you’re likely to have heard this term in the same sentence as deepfake, AI art, and ChatGPT. But how does the technology work?
Here’s a simple explanation of how generative AI powers many of today’s famous (or infamous) AI tools.
Generative AI is the specific type of artificial intelligence that powers many of the AI tools available today in the pockets of the public. The “G” in ChatGPT stands for generative. Today’s gen AI evolved from the chatbots of the 1960s. Now, as AI and related technologies like deep learning and machine learning have evolved, generative AI can answer prompts and create text, art, and videos, and even simulate convincing human voices.
Think of generative AI as a sponge that desperately wants to delight the users who ask it questions.
First, a gen AI model begins with a massive information deposit. Gen AI can soak up huge amounts of data. For instance, ChatGPT was trained on roughly 300 billion words, amounting to hundreds of gigabytes of text. The AI retains the information that is fed into it and uses those nuggets of knowledge to inform any answer it spits out.
From there, a generative adversarial network (GAN) algorithm constantly competes with itself within the gen AI model. This means that the AI will try to outdo itself to produce an answer it believes is the most accurate. The more information and queries it answers, the “smarter” the AI becomes.
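If you’re curious what that self-competition looks like in practice, here’s a deliberately tiny, hypothetical sketch in Python. It is not a real GAN (real ones pit two neural networks against each other); it simply illustrates the same loop described above: propose candidates, score them against the real data, and keep whatever scored best.

```python
import random

# Toy, hypothetical sketch of an adversarial loop. NOT a real GAN.
real_data_mean = 7.0      # the "real" data the generator tries to imitate
generator_guess = 0.0     # the generator's current output

def discriminator(sample: float) -> float:
    """Score a sample: higher means it looks more like the real data."""
    return -abs(sample - real_data_mean)

for _ in range(1000):
    # The generator proposes several candidates near its current output...
    candidates = [generator_guess + random.uniform(-1, 1) for _ in range(10)]
    # ...the discriminator judges them, and the most convincing one wins...
    best = max(candidates, key=discriminator)
    # ...so the generator updates itself to produce more output like it.
    generator_guess = best

print(f"Generator's output after training: {generator_guess:.2f}")  # close to 7.0
```

Over many rounds, the generator’s output drifts toward the real data, which is the “the more it answers, the smarter it gets” effect in miniature.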
Google’s content generation tool, Bard, is a great way to illustrate generative AI in action. Bard is based on gen AI and large language models. It’s trained on all types of literature, and when asked to write a short story, it composes by finding language patterns and choosing the words that most often follow the ones preceding them. In a 60 Minutes segment, Bard composed an eloquent short story that nearly brought the presenter to tears, but its composition was an exercise in patterns, not a display of understanding human emotions. So, while the technology is certainly smart, it’s not exactly creative.
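As a hedged illustration of that word-pattern idea, here’s a toy Python sketch that counts which word most often follows another in a tiny sample text and then “writes” by sampling from those counts. Real large language models are vastly more sophisticated, and the sample corpus here is made up for the example, but the pattern-following spirit is similar.

```python
import random
from collections import defaultdict, Counter

# A tiny, made-up training corpus; real models train on billions of words.
corpus = (
    "the cat sat on the mat and the cat slept on the mat "
    "while the dog sat on the rug and the dog slept"
).split()

# Count which word follows each word (a simple bigram model).
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly picking a likely next word."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        # Weight choices by how often each follower appeared in the corpus.
        nxt = random.choices(list(options), weights=options.values())[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))   # e.g. "the cat sat on the mat and the dog"
```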
The major debates surrounding generative AI usually deal with how to use gen AI-powered tools for good. For instance, ChatGPT can be an excellent outlining partner if you’re writing an essay or completing a task at work; however, it’s irresponsible and is considered cheating if a student or an employee submits ChatGPT-written content word for word as their own work. If you do decide to use ChatGPT, it’s best to be transparent that it helped you with your assignment. Cite it as a source and make sure to double-check your work!
One lawyer got in serious trouble when he trusted ChatGPT to write an entire brief and then didn’t take the time to edit its output. It turns out that much of the content was incorrect and cited sources that didn’t exist. This is a phenomenon known as an AI hallucination, meaning the program fabricated a response instead of admitting that it didn’t know the answer to the prompt.
Deepfake and voice simulation technology supported by generative AI are other applications that people must use responsibly and with transparency. Deepfake and AI voices are gaining popularity in viral videos and on social media. Posters use the technology in funny skits poking fun at celebrities, politicians, and other public figures. However, to avoid confusing the public and possibly spurring fake news reports, these comedians have a responsibility to add a disclaimer that the real person was not involved in the skit. Fake news reports can spread with the speed and ferocity of wildfire.
The widespread use of generative AI doesn’t necessarily mean the internet is a less authentic or a riskier place. It just means that people must use sound judgment and hone their radar for identifying malicious AI-generated content. Generative AI is an incredible technology. When used responsibly, it can add great color, humor, or a different perspective to written, visual, and audio content.
Technology can also help protect against voice cloning attacks. Tools like McAfee Deepfake Detector aim to detect AI-generated deepfakes, including audio-based clones. Stay informed about advancements in security technology and consider utilizing such tools to bolster your defenses.
The post What Is Generative AI and How Does It Work? appeared first on McAfee Blog.
Happy World Social Media Day! Today’s a day for celebrating the life-long friendships you’ve made thanks to social media. Social media was invented to help users meet new people with shared interests, stay in touch, and learn more about the world. Facebook, Twitter, Instagram, Reddit, TikTok, LinkedIn, and the trailblazing MySpace have all certainly succeeded in those aims.
This is the first World Social Media Day where artificial intelligence (AI) joins the party. AI has existed in many forms for decades, but it’s only recently that AI-powered apps and tools are available in the pockets and homes of just about everyone. ChatGPT, Voice.ai, DALL-E, and others are certainly fun to play with and can even speed up your workday.
While scrolling through hilarious videos and commenting on your friends’ life milestones are practically national pastimes, some people are making it their pastime to fill our favorite social media feeds with AI-generated content. Not all of it is malicious, but some AI-generated social media posts are scams.
Here are some examples of common AI-generated content that you’re likely to encounter on social media.
Have you scrolled through your video feed and come across voices that sound exactly like the current and former presidents? And are they playing video games together? Comic impersonators can be hilariously accurate with their copycatting, but the voice track to this video is spot on. This series of videos, created by TikToker Voretecks, uses AI voice generation to mimic presidential voices and pit them against each other to bring joy to their viewers.1 In this case, AI-generated voices are mostly harmless, since the videos are in jest. Context clues make it obvious that the presidents didn’t gather to hunt rogue machines together.
AI voice generation turns nefarious when it’s meant to trick people into thinking or acting a certain way. For example, an AI voiceover made it sound like a candidate for Chicago mayor said something inflammatory that he never said.2 Fake news is likely to skyrocket with the fierce 2024 election on the horizon. Social media sites, especially Twitter, are an effective avenue for political saboteurs to spread their lies far and wide to discredit their opponents.
Finally, while it might not appear on your social media feed, scammers can use what you post on social media to impersonate your voice. According to McAfee’s Beware the Artificial Imposters Report, a scammer requires only three seconds of audio to clone your voice. From there, the scammer may reach out to your loved ones with extremely realistic phone calls to steal money or sensitive personal information. The report also found that of the people who lost money to an AI voice scam, 36% said they lost between $500 and $3,000.
To keep your voice out of the hands of scammers, perhaps be more mindful of the videos or audio clips you post publicly. Also, consider having a secret safe word with your friends and family that would stump any would-be scammer.
A deepfake, or the alteration of an existing photo or video of a real person to show them doing something that never happened, is another tactic used by social media comedians and fake news spreaders alike. In the case of the former, one company founded its entire business on deepfakes. The company is most famous for its deepfakes of Tom Cruise, though it has since evolved into impersonating other celebrities, generative AI research, and translation.3
When you see videos or images on social media that seem odd, look for a disclaimer – either on the post itself or in the poster’s bio – about whether the poster used deepfake technology to create the content. A responsible social media user will alert their audiences when the content they post is AI generated.
Again, deepfake and other AI-altered images become malicious when they cause social media viewers to think or act a certain way. Fake news outlets may portray a political candidate doing something embarrassing to sway voters. Or an AI-altered image of animals in need may tug at the heartstrings of social media users and cause them to donate to a fake fundraiser. Deepfake challenges the saying “seeing is believing.”
ChatGPT is everyone’s favorite creativity booster and taskmaster for any writing chore. It is also the new best friend of social media bot accounts. Present on just about every social media platform, bot accounts spread spam and fake news and bolster follower numbers. Bot accounts used to be easy to spot because their posts were unoriginal and poorly written. Now, with the AI-assisted creativity and excellent sentence-level composition of ChatGPT, bot accounts sound a lot more realistic. And the humans managing those hundreds of bot accounts can now create content more quickly than if they were writing each post themselves.
In general, be wary when anyone you don’t know comments on one of your posts or reaches out to you via direct message. If someone says you’ve won a prize but you don’t remember ever entering a contest, ignore it.
With the advent of mainstream AI, everyone should approach every social media post with skepticism. Be on the lookout for anything that seems amiss or too fantastical to be true. And before you share a news item with your following, conduct your own background research to confirm that it’s true.
To protect or restore your identity should you fall for any social media scams, you can trust McAfee+. McAfee+ monitors your identity and credit to help you catch suspicious activity early. Also, you can feel secure in the $1 million in identity theft coverage and identity restoration services.
Social media is a fun way to pass the time, keep up with your friends, and learn something new. Don’t be afraid of AI on social media. Instead, laugh at the parodies, ignore and report the fake news, and enjoy social media confidently!
1Business Insider, “AI-generated audio of Joe Biden and Donald Trump trashtalking while gaming is taking over TikTok”
2The Hill, “The impending nightmare that AI poses for media, elections”
3Metaphysic, “Create generative AI video that looks real”
The post Be Mindful of These 3 AI Tricks on World Social Media Day appeared first on McAfee Blog.
With the number of cyber threats and breaches dominating the headlines, it can seem like a Herculean task to cover all your cybersecurity bases. We’re aware that there are ten sections on this cybersecurity awareness checklist, but it won’t take hours and hours to tick every box. In fact, some of these areas only require you to check a box on your device or invest in the cybersecurity tools that will handle the rest for you. Also, you may already be doing some of these things!
It’s easy to be cyber smart. Here are the cybersecurity awareness basics to instantly boost your safety and confidence in your online presence.
Software update notifications always seem to ping on the outskirts of your desktop and mobile device at the most inconvenient times. What’s more inconvenient though is having your device hacked. One easy tip to improve your cybersecurity is to update your device software whenever upgrades are available. Most software updates include security patches that smart teams have created to foil cybercriminals. The more outdated your apps or operating system is, the more time criminals have had to work out ways to infiltrate them.
Enabling automatic updates on all your devices only takes a few clicks or taps. Many major updates occur in the early hours of the morning, meaning you’ll likely never notice your device was busy updating. You’ll just wake up to new, secure software!
Just because social media personalities document their entire days, literally from the moment they wake up, doesn’t mean you should do the same. For various reasons, it’s best to keep some details about your life a mystery to the internet.
The best way to avoid these risks is to set your online profiles to private and edit your list of followers to only people you have met in real life and trust. Also, you may want to consider revising what you post about and how often.
If you genuinely love sharing moments from your daily life, consider sending a newsletter to a curated group of close friends and family. Aspiring influencers who still want to capture and publish every aspect of their daily lives should be extremely careful about keeping sensitive details about themselves private, such as blurring their house number, not revealing their hometown, turning off location services, and going by a nickname instead of their full legal name.
Most sites won’t even let you proceed with creating an account if you don’t have a strong enough password. A strong password is one with a mix of capital and lowercase letters, numbers, and special characters. What also makes for an excellent password is one that’s unique. Reusing passwords can be just as risky as using “password123” or your pet’s name plus your birthday as a password. A reused password can put all your online accounts at risk, due to a practice called credential stuffing. Credential stuffing is a tactic where a cybercriminal attempts to input a stolen username and password combination in dozens of random websites to see which doors it opens.
Remembering a different password for each of your online accounts is almost an impossible task. Luckily, password managers make it so you only have to remember one password ever again! Password managers safeguard all your passwords in one secure desktop extension or smartphone app that you can use anywhere.
It’s best to create passwords or passphrases that have a secret meaning that only you know. Stay away from using significant dates, names, or places, because those are easier to guess. You can also leave it up to your password manager to randomly generate a password for you. The resulting unintelligible jumble of numbers, letters, and symbols is virtually impossible for anyone to guess.
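If you’re wondering what “randomly generate” means under the hood, here’s a minimal sketch of the general idea using Python’s built-in secrets module. The character set and length are assumptions for the example; your password manager’s exact approach will differ.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # e.g. 'k8#Qv!2pZr@x7Lm$'
```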
Not all corners of the internet are safe to visit. Some dark crevices hide malware that can sneak onto your device without you knowing. There are various types of malware, but the motive behind all of them is the same: to steal your personally identifiable information (PII) or your device’s computing power for a cybercriminal’s own financial gain.
Sites that claim to have free downloads of TV shows, movies, and games are notorious for harboring malware. Practice safe downloading habits, such as ensuring the site is secure, checking to see that it looks professional, and inspecting the URLs for suspicious file extensions.
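As one small, hypothetical example of that last habit, here’s a quick Python check that flags download links ending in file extensions commonly abused by malware. The extension list is illustrative, not exhaustive, and passing the check doesn’t make a file safe.

```python
from urllib.parse import urlparse

# Illustrative, non-exhaustive list of commonly abused file extensions.
RISKY_EXTENSIONS = (".exe", ".scr", ".bat", ".cmd", ".js", ".vbs", ".msi")

def looks_risky(url: str) -> bool:
    """Flag downloads whose path ends in a commonly abused extension."""
    path = urlparse(url).path.lower()
    return path.endswith(RISKY_EXTENSIONS)

print(looks_risky("https://example.com/free-movie.exe"))   # True
print(looks_risky("https://example.com/press-kit.pdf"))    # False
```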
Additionally, not all internet connections are free from prying eyes. Public Wi-Fi networks – like those in cafes, libraries, hotels, and transportation hubs – are especially prone. Because anyone can connect to a public network without needing a password, cybercriminals can digitally eavesdrop on other people on the same network. It’s unsafe to do your online banking, shopping, and other activities that deal with your financial or sensitive personal information while on public Wi-Fi.
However, there is one way to do so safely, and that’s with a virtual private network (VPN). A VPN is a type of software you can use on your smartphone, tablet, laptop, or desktop. It encrypts all your outgoing data, making it nearly impossible for a cybercriminal to snoop on your internet session.
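A VPN’s tunneling involves much more than this, but as a rough illustration of what encryption does to your data in transit, here’s a minimal sketch using the third-party cryptography package (assuming it’s installed with pip install cryptography).

```python
from cryptography.fernet import Fernet

# Generate a symmetric key and use it to encrypt a message.
key = Fernet.generate_key()
cipher = Fernet(key)

message = b"checking account balance: $1,234.56"
encrypted = cipher.encrypt(message)

# An eavesdropper on the network would only see unreadable ciphertext.
print(encrypted)                      # e.g. b'gAAAAABk...'
print(cipher.decrypt(encrypted))      # b'checking account balance: $1,234.56'
```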
You’ve likely already experienced a phishing attempt, whether you were aware of it or not. Phishing is a common tactic used to coax personal details from unsuspecting or trusting people. Phishers often initiate contact through texts, emails, or social media direct messages, and they aim to get enough information to break into your online accounts or to impersonate you.
AI text generator tools are making it more difficult to pinpoint a phisher, as messages can seem very humanlike. Typos and nonsensical sentences used to be the main indicators of a phishing attempt, but text generators generally use correct spelling and grammar, so stay alert for other tell-tale signs of a phishing attempt.
Never engage with a phishing attempt. Do not forward the message or respond to the sender, and never click on any links included in the message. The links could direct you to malicious sites that could infect your device with malware or spyware.
Before you delete the message, block the sender, mark the message as junk, and report the phisher. Reporting can go a long way toward hopefully preventing the phisher from targeting someone else.
When a security breach occurs, you can be sure that the news will report it, and companies are generally required by law to disclose breaches that affect their customers. Keep a keen eye on the news and your inbox for notifications about recent breaches. Quick action is necessary to protect your personal and financial information, which is why you should stay aware of current events.
The moment you hear about a breach on the news or see an email from a company to its customers about an incident, change your account’s password and double-check your account’s recent activity to ensure nothing is amiss. Then await further instructions communicated through official company correspondence and channels.
Cybercriminals aren’t above adding insult to injury and further scamming customers affected in breaches. Phishers may spam inboxes impersonating the company and sending malware-laden links they claim will reset your password. Continue to scrutinize your messages and keep an eye on the company’s official website and verified social media accounts to ensure you’re getting company-approved advice.
One great mantra to guide your cybersecurity habits is: If you connect it, protect it. This means that any device that links to the internet should have security measures in place to shield it from cybercriminals. Yes, this includes your smart TV, smart refrigerator, smart thermostat, and smart lightbulbs!
Compose a list of the smart home devices you own. (You probably have more than you thought!) Then, make sure that every device is using a password you created, instead of the default password the device came with. Default passwords can be reused across an entire line of appliances, so if a cybercriminal cracks the code on someone else’s smart washing machine, they could weasel their way into yours with the same password.
Another way to secure your connected home devices is by enabling two-factor authentication (2FA). This usually means enrolling your phone number or email address with the device and inputting one-time codes periodically to log into the connected device. 2FA is an excellent way to frustrate a cybercriminal, as it’s extremely difficult for them to bypass this security measure. It may add an extra 15 seconds to your login process, but the peace of mind is worth the minor inconvenience.
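For a sense of how those one-time codes work behind the scenes, here’s a hedged sketch of a time-based one-time password (TOTP), one common flavor of 2FA, using the third-party pyotp library (assumed installed for the example).

```python
import pyotp

# The service and your device share this secret once, at enrollment.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Every 30 seconds, the shared secret plus the current time yields a new code.
code = totp.now()
print("Current one-time code:", code)

# The service runs the same math and checks that the codes match.
print("Valid?", totp.verify(code))    # True (within the time window)
```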
Finally, protect your entire home network with a secure router, the device that connects your home Wi-Fi network to the internet. Again, change the password from the factory setting. And if you decide to rename the network, have fun with it, but leave your name and address out of the new name.
When flip phones arrived on the scene in the 1990s and early 2000s, the worst that happened when they went missing was that you lost a cache of stored text messages and call history. Now, when your smartphone is misplaced or stolen, it can seem like your whole online life has vanished. Mobile devices store a lot of our sensitive information, which is why it’s key to safeguard not only your accounts but also the devices that house them.
The best way to lock your device against anyone but yourself is to set up face or fingerprint ID. This makes it virtually impossible for a criminal to open your device. Also, passcode- or password-protect all your devices. It may seem like an inconvenience now, but within a few days your fingers will glide across the keyboard or number pad fluently, adding maybe an extra second to opening your device.
Another way to safeguard your device and the important information within it is to disable your favorite internet browser from auto-filling your passwords and credit card information. In the hands of a criminal, these details could lead to significant losses. A password manager here comes in handy for quick and secure password and username inputting.
Credit experts recommend checking your credit at least once yearly, but there’s no harm in checking your credit score more often. It’s only hard inquiries (or credit checks initiated by lenders) that may lower your credit score. Consider making it a habit to check your credit once every quarter. The first signs of identity theft often appear in a drastically lower credit score, which means that someone may be opening lines of credit in your name.
Also, if you’re not planning to apply for a new credit card or a loan anytime soon, why not lock your credit so no one can access it? A credit freeze prevents anyone (yourself included) from opening new lines of credit in your name, thus keeping your credit out of the hands of thieves.
Picking up the pieces after a thief steals your identity is expensive, tedious, and time-consuming. Identity remediation includes reaching out to all three credit bureaus, filing reports, and spending hours tracking down your PII that’s now strewn across the internet.
Identity protection services can guard your identity so you hopefully avoid this entire scenario altogether. McAfee identity monitoring tracks the dark web for you and alerts you, on average, ten months sooner that something is amiss when compared to similar services. And if something does happen to your identity, McAfee identity restoration services offers $1 million in identity restoration and lends its support to help you get your identity and credit back in order.
The best complement to your newfound excellent cyber habits is a toolbelt of excellent services to patch any holes in your defense. McAfee+ includes all the services you need to boost your peace of mind about your online identity and privacy. You can surf public Wi-Fi safely with its secure VPN, protect your device with antivirus software, freeze your credit with security freeze, keep tabs on your identity, and more!
The post 10 Easy Things You Can Do Today to Improve Your Cybersecurity appeared first on McAfee Blog.