
Chinese Hackers Using Deepfakes in Advanced Mobile Banking Malware Attacks

A Chinese-speaking threat actor codenamed GoldFactory has been attributed to the development of highly sophisticated banking trojans, including a previously undocumented iOS malware called GoldPickaxe that's capable of harvesting identity documents and facial recognition data, and of intercepting SMS messages. "The GoldPickaxe family is available for both iOS and Android platforms,"

How to Protect School Children From Deep Fakes

Deep fakes are a growing concern in the age of digital media and can be extremely dangerous for school children. Deep fakes are digital images, videos, or audio recordings that have been manipulated to look or sound like someone else. They can be used to spread misinformation, enable harassment, and even lead to identity theft. With the prevalence of digital media, it’s important to protect school children from deep fakes.

Here are some tips to help protect school children from deep fakes:  

1. Educate students on deep fakes.

Educating students on deep fakes is an essential step in protecting them from the dangers of these digital manipulations. Schools should provide students with information about the different types of deep fakes and how to spot them.  

2. Encourage students to be media literate.

Media literacy is an important skill that students should have in order to identify deep fakes and other forms of misinformation. Schools should provide students with resources to help them understand how to evaluate the accuracy of a digital image or video.  

3. Promote digital safety. 

Schools should emphasize the importance of digital safety and provide students with resources on how to protect their online identities. This includes teaching students about the risks of sharing personal information online, using strong passwords, and being aware of phishing scams.  

4. Monitor online activity. 

Schools should monitor online activity to ensure that students are not exposed to deep fakes or other forms of online harassment. Schools should have policies in place to protect students from online bullying and harassment, and they should take appropriate action if they find any suspicious activity.  

By following these tips, schools can help protect their students from the dangers of deep fakes. Educating students on deep fakes, encouraging them to be media literate, promoting digital safety, and monitoring online activity are all important steps to ensure that school children are safe online. 

By equipping students with the tools they need to navigate the online world, schools can also help them learn how to use digital technology responsibly. Through educational resources and programs, schools can teach students the importance of digital citizenship and how to use digital technology ethically and safely. Finally, schools should promote collaboration and communication between parents, students, and school administration to ensure everyone is aware of the risks of deep fakes and other forms of online deception.

Deep fakes have the potential to lead to identity theft, particularly if deep fake tools are used to steal the identities of students or even teachers. McAfee’s Identity Monitoring Service, as part of McAfee+, monitors the dark web for your personal info, including email, government IDs, credit card and bank account info, and more. We’ll help keep your personal info safe, with early alerts if your data is found on the dark web, so you can take action to secure your accounts before they’re used for identity theft.

The post How to Protect School Children From Deep Fakes appeared first on McAfee Blog.

This Election Season, Be on the Lookout for AI-generated Fake News

It’s that time of year again: election season! You already know what to expect when you flip on the TV. Get ready for a barrage of commercials, each candidate saying enough to get you to like them but nothing specific enough that they must stay beholden to it should they win.

What you might not expect is for sensationalist election “news” to barge in uninvited on your screens. Fake news – exaggerated or completely falsified articles claiming to be unbiased, factual journalism, often spread via social media – can pop up anytime and anywhere. This election season’s fake news machine will be different from previous years’ because of the emergence of mainstream artificial intelligence tools.

AI’s Role in Fake News Generation 

Here are a few ways desperate zealots may use various AI tools to stir unease and spread misinformation around the upcoming election. 

Deepfake 

We’ve had time to learn and operate by the adage of “Don’t believe everything you read on the internet.” But now, thanks to deepfake, that lesson must extend to “Don’t believe everything you SEE on the internet.” Deepfake is the digital manipulation of a video or photo. The result often depicts a scene that never happened. At a quick glance, deepfakes can look very real! Some still look real after studying them for a few minutes. 

People may use deepfake to paint a candidate in a bad light or to spread sensationalized false news reports. For example, a deepfake could make it look like a candidate flashed a rude hand gesture or show a candidate partying with controversial public figures.  

AI Voice Synthesizers 

According to McAfee’s Beware the Artificial Imposter report, it only takes three seconds of authentic audio and minimal effort to create a mimicked voice with 85% accuracy. When someone puts their mind to it and takes the time to hone the voice clone, they can achieve a 95% voice match to the real deal. 

Well-known politicians have thousands of seconds’ worth of audio clips available to anyone on the internet, giving voice cloners plenty of samples to choose from. Fake news spreaders could employ AI voice generators to add an authentic-sounding talk track to a deepfake video or to fabricate a snappy and sleazy “hot mic” clip to share far and wide online.

AI Text Generators 

Programs like ChatGPT and Bard can make anyone sound intelligent and eloquent. In the hands of rabble-rousers, AI text generation tools can create articles that sound almost professional enough to be real. Plus, AI allows people to churn out content quickly, meaning that they could spread dozens of fake news reports daily. The number of fake articles is limited only by the modest imagination needed to write a short prompt.

How to Spot AI-assisted Fake News

Before you get tricked by a fake news report, here are some ways to spot a malicious use of AI intended to mislead your political leanings: 

  • Distorted images. Fabricated images and videos aren’t perfect. If you look closely, you can often spot the difference between real and fake. For example, AI-created art often adds extra fingers or creates faces that look blurry.  
  • Robotic voices. When someone claims an audio clip is legitimate, listen closely to the voice, as it could be AI-generated. AI voice synthesizers give themselves away not when you listen to the recording as a whole, but when you break it down syllable by syllable. A lot of editing is usually involved in fine-tuning a voice clone. AI voices often make awkward pauses, clip words short, or put unnatural emphasis in the wrong places. Remember, most politicians are expert public speakers, so genuine speeches are likely to sound professional and rehearsed.  
  • Strong emotions. No doubt about it, politics touch some sensitive nerves; however, if you see a post or “news report” that makes you incredibly angry or very sad, step away. Similar to phishing emails that urge readers to act without thinking, fake news reports stir up a frenzy – manipulating your emotions instead of using facts – to sway your way of thinking. 

Share Responsibly and Question Everything  

Is what you’re reading, seeing, or hearing too bizarre to be true? Then it probably isn’t. If you’re interested in learning more about a political topic you came across on social media, do a quick search to corroborate the story. Keep a list of respected news outlets bookmarked so it’s quick and easy to check the authenticity of a report.

If you encounter fake news, the best way you can interact with it is to ignore it. Or, in cases where the content is offensive or incendiary, you should report it. Even if the fake news is laughably off-base, it’s still best not to share it with your network, because that’s exactly what the original poster wants: For as many people as possible to see their fabricated stories. All it takes is for someone within your network to look at it too quickly, believe it, and then perpetuate the lies. 

It’s great if you’re passionate about politics and the various issues on the ballot. Passion is a powerful driver of change. But this election season, try to focus on what unites us, not what divides us. 

The post This Election Season, Be on the Lookout for AI-generated Fake News appeared first on McAfee Blog.

AI in the Wild: Malicious Applications of Mainstream AI Tools

By: McAfee

It’s not all funny limericks, bizarre portraits, and hilarious viral skits. ChatGPT, Bard, DALL-E, Craiyon, Voice.ai, and a whole host of other mainstream artificial intelligence tools are great for whiling away an afternoon or helping you with your latest school or work assignment; however, cybercriminals are bending AI tools like these to aid in their schemes, adding a whole new dimension to phishing, vishing, malware, and social engineering.  

Here are some recent reports of AI’s use in scams plus a few pointers that might tip you off should any of these happen to you. 

1. AI Voice Scams

Vishing – or phishing over the phone – is not a new scheme; however, AI voice mimickers are making these scamming phone calls more believable than ever. In Arizona, a fake kidnapping phone call caused several minutes of panic for one family, as a mother received a demand for ransom to release her alleged kidnapped daughter. On the phone, the mother heard a voice that sounded exactly like her child’s, but it turned out to be an AI-generated facsimile.    

In reality, the daughter was not kidnapped. She was safe and sound. The family didn’t lose any money because they did the right thing: They contacted law enforcement and kept the scammer on the phone while they located the daughter.1 

Imposter scams accounted for a loss of $2.6 billion in the U.S. in 2022. Emerging AI scams could increase that staggering total. Globally, about 25% of people have either experienced an AI voice scam or know someone who has, according to McAfee’s Beware the Artificial Imposter report. Additionally, the study discovered that 77% of voice scam targets lost money as a result.  

How to hear the difference 

No doubt about it, it’s frightening to hear a loved one in distress, but try to stay as calm as possible if you receive a phone call claiming to be someone in trouble. Do your best to really listen to the “voice” of your loved one. AI voice technology is incredible, but there are still some kinks in the technology. For example, does the voice have unnatural hitches? Do words cut off just a little too early? Does the tone of certain words not quite match your loved one’s accent? To pick up on these small details, a level head is necessary. 

What you can do as a family today to avoid falling for an AI vishing scam is to agree on a family password. This can be an obscure word or phrase that is meaningful to you. Keep this password to yourselves and never post about it on social media. This way, if a scammer ever calls you claiming to have or be a family member, this password can help you tell a fake emergency from a real one.

2. Deepfake Ransom and Fake Advertisements

Deepfake, or the digital manipulation of an authentic image, video, or audio clip, is an AI capability that unsettles a lot of people. It challenges the long-held axiom that “seeing is believing.” If you can’t quite believe what you see, then what’s real? What’s not? 

The FBI is warning the public against a new scheme where cybercriminals are editing explicit footage and then blackmailing innocent people into sending money or gift cards in exchange for not posting the compromising content.2 

Deepfake technology was also at the center of an incident involving a fake ad. A scammer created a fake ad depicting Martin Lewis, a trusted finance expert, advocating for an investment venture. The Facebook ad attempted to add legitimacy to its nefarious endeavor by including the deepfaked Lewis.3  

How to respond to ransom demands and questionable online ads 

No response is the best response to a ransom demand. You’re dealing with a criminal. Who’s to say they won’t release their fake documents even if you give in to the ransom? Involve law enforcement as soon as a scammer approaches you, and they can help you resolve the issue. 

Just because a reputable social media platform hosts an advertisement doesn’t mean that the advertiser is a legitimate business. Before buying anything or investing your money with a business you found through an advertisement, conduct your own background research on the company. All it takes is five minutes to look up its Better Business Bureau rating and other online reviews to determine if the company is reputable. 

To identify a deepfake video or image, check for inconsistent shadows and lighting, face distortions, and people’s hands. That’s where you’ll most likely spot small details that aren’t quite right. Like AI voices, deepfake technology is often accurate, but it’s not perfect. 

3. AI-generated Malware and Phishing Emails

Content generation tools have some safeguards in place to prevent them from creating text that could be used illegally; however, some cybercriminals have found ways around those rules and are using ChatGPT and Bard to assist in their malware and phishing operations. For example, if a criminal asked ChatGPT to write key-logging malware, it would refuse. But if they rephrased and asked it to compose code that captures keystrokes, it might comply. One researcher demonstrated that even someone with little knowledge of coding could use ChatGPT this way, making malware creation simpler and more accessible than ever.4 Similarly, AI text generation tools can create convincing phishing emails, and create them quickly. In theory, this could speed up a phisher’s operation and widen their reach.

How to avoid AI-written malware and phishing attempts 

You can avoid AI-generated malware and phishing correspondences the same way you deal with the human-written variety: Be careful and distrust anything that seems suspicious. To steer clear of malware, stick to websites you know you can trust. A safe browsing tool like McAfee web protection – which is included in McAfee+ – can doublecheck that you stay off of sketchy websites. 

As for phishing, when you see emails or texts that demand a quick response or seem out of the ordinary, be on alert. Traditional phishing correspondences are usually riddled with typos, misspellings, and poor grammar. AI-written lures are often written well and rarely contain errors. This means that you must be diligent in vetting every message in your inbox. 

Slow Down, Keep Calm, and Be Confident 

While the debate about regulating AI heats up, the best thing you can do is to use AI responsibly. Be transparent when you use it. And if you suspect you’re encountering a malicious use of AI, slow down and try your best to evaluate the situation with a clear mind. AI can create some convincing content, but trust your instincts and follow the above best practices to keep your money and personal information out of the hands of cybercriminals. 

1CNN, “‘Mom, these bad men have me’: She believes scammers cloned her daughter’s voice in a fake kidnapping”

2NBC News, “FBI warns about deepfake porn scams”

3BBC, “Martin Lewis felt ‘sick’ seeing deepfake scam ad on Facebook”

4Dark Reading, “Researcher Tricks ChatGPT Into Building Undetectable Steganography Malware”

The post AI in the Wild: Malicious Applications of Mainstream AI Tools appeared first on McAfee Blog.

10 Artificial Intelligence Buzzwords You Should Know

Artificial intelligence used to be reserved for the population’s most brilliant scientists and isolated in the world’s top laboratories. Now, AI is available to anyone with an internet connection. Tools like ChatGPT, Voice.ai, DALL-E, and others have brought AI into daily life, but sometimes the terms used to describe their capabilities and inner workings are anything but mainstream. 

Here are 10 common terms you’re likely to hear in the same sentence as your favorite AI tool, on the nightly news, or by the water cooler. Keep this AI dictionary handy to stay informed about this popular (and sometimes controversial) topic.

AI-generated Content 

AI-generated content is any piece of written, audio, or visual media that was created partially or completely by an artificial intelligence-powered tool. 

If someone uses AI to create something, it doesn’t automatically mean they cheated or irresponsibly cut corners. AI is often a great place to start when creating outlines, compiling thought-starters, or seeking a new way of looking at a problem.  

AI Hallucination 

When your question stumps an AI, it doesn’t always admit that it doesn’t know the answer. Instead of giving no answer at all, it may make one up that it thinks you want to hear. This made-up answer is known as an AI hallucination.

One real-world case of a costly AI hallucination occurred in New York where a lawyer used ChatGPT to write a brief. The brief seemed complete and cited its sources, but it turns out that none of the sources existed.1 It was all a figment of the AI’s “imagination.”  

Black Box 

To understand the term black box, imagine the AI as a system of cogs, pulleys, and conveyor belts housed within a box. In a see-through box, you can see how the input is transformed into the final product; however, some AI models are referred to as black boxes. That means you don’t know how the AI arrived at its conclusions; the AI completely hides its reasoning process. A black box can be a problem if you’d like to doublecheck the AI’s work.

Deepfake 

Deepfake is the manipulation of a photo, video, or audio clip to portray events that never happened. Often used for humorous social media skits and viral posts, deepfakes are also leveraged by unsavory characters to spread fake news reports or scam people.

For example, people are inserting politicians into unflattering poses and photo backgrounds. Sometimes the deepfake is intended to get a laugh, but other times the deepfake creator intends to spark rumors that could lead to dissent or tarnish the reputation of the photo subject. One tip to spot a deepfake image is to look at the hands and faces of people in the background. Deepfakes often add or subtract fingers or distort facial expressions. 

AI-assisted audio impersonations – which are considered deepfakes – are also rising in believability. According to McAfee’s “Beware the Artificial Imposter” report, 25% of respondents globally said that a voice scam happened either to themselves or to someone they know. Seventy-seven percent of people who were targeted by a voice scam lost money as a result.  

Deep Learning 

The closer an AI’s thinking process is to the human brain, the more accurate the AI is likely to be. Deep learning involves training an AI to reason and recall information like a human, meaning that the machine can identify patterns and make predictions. 

Explainable AI 

Explainable AI – or white box – is the opposite of black box AI. An explainable AI model always shows its work and how it arrived at its conclusion. Explainable AI can boost your confidence in the final output because you can doublecheck what went into the answer. 

Generative AI 

Generative AI is the type of artificial intelligence that powers many of today’s mainstream AI tools, like ChatGPT, Bard, and Craiyon. Like a sponge, generative AI soaks up huge amounts of data and recalls it to inform every answer it creates. 

Machine Learning 

Machine learning is integral to AI because it lets a system learn and continually improve. Without explicit instructions to do so, an AI that uses machine learning gets smarter the more it’s used.

Responsible AI 

People must not only use AI responsibly, but the people designing and programming AI must do so responsibly, too. Technologists must ensure that the data the AI depends on is accurate and free from bias. This diligence is necessary to confirm that the AI’s output is correct and without prejudice.  

Sentient 

Sentient is an adjective that means someone or something is aware of feelings, sensations, and emotions. In futuristic movies depicting AI, the characters’ world goes off the rails when the robots become sentient, or when they “feel” human-like emotions. While it makes for great Hollywood drama, today’s AI is not sentient. It doesn’t empathize or understand the true meanings of happiness, excitement, sadness, or fear.

So, even if an AI composed a short story that is so beautiful it made you cry, the AI doesn’t know that what it created was touching. It was just fulfilling a prompt and used a pattern to determine which word to choose next.  

1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”

The post 10 Artificial Intelligence Buzzwords You Should Know appeared first on McAfee Blog.

Be Mindful of These 3 AI Tricks on World Social Media Day

By: McAfee

Happy World Social Media Day! Today’s a day about celebrating the life-long friendships you’ve made thanks to social media. Social media was invented to help users meet new people with shared interests, stay in touch, and learn more about the world. Facebook, Twitter, Instagram, Reddit, TikTok, LinkedIn, and the trailblazing MySpace have all certainly succeeded in those aims.

This is the first World Social Media Day where artificial intelligence (AI) joins the party. AI has existed in many forms for decades, but it’s only recently that AI-powered apps and tools are available in the pockets and homes of just about everyone. ChatGPT, Voice.ai, DALL-E, and others are certainly fun to play with and can even speed up your workday.  

While scrolling through hilarious videos and commenting on your friends’ life milestones are practically national pastimes, some people are making it their pastime to fill our favorite social media feeds with AI-generated content. Not all of it is malicious, but some AI-generated social media posts are scams.  

Here are some examples of common AI-generated content that you’re likely to encounter on social media. 

AI Voice Generation 

Have you scrolled through your video feed and come across voices that sound exactly like the current and former presidents? And are they playing video games together? Comic impersonators can be hilariously accurate with their copycatting, but the voice tracks in these videos are spot on. This series of videos, created by TikToker Voretecks, uses AI voice generation to mimic presidential voices and pit them against each other to bring joy to their viewers.1 In this case, AI-generated voices are mostly harmless, since the videos are in jest. Context clues make it obvious that the presidents didn’t gather to hunt rogue machines together.

AI voice generation turns nefarious when it’s meant to trick people into thinking or acting a certain way. For example, an AI voiceover made it look like a candidate for Chicago mayor said something inflammatory that he never said.2 Fake news is likely to skyrocket with the fierce 2024 election on the horizon. Social media sites, especially Twitter, are an effective avenue for political saboteurs to spread their lies far and wide to discredit their opponent. 

Finally, while it might not appear on your social media feed, scammers can use what you post on social media to impersonate your voice. According to McAfee’s Beware the Artificial Imposter report, a scammer requires only three seconds of audio to clone your voice. From there, the scammer may reach out to your loved ones with extremely realistic phone calls to steal money or sensitive personal information. The report also found that of the people who lost money to an AI voice scam, 36% said they lost between $500 and $3,000.

To keep your voice out of the hands of scammers, perhaps be more mindful of the videos or audio clips you post publicly. Also, consider having a secret safe word with your friends and family that would stump any would-be scammer.  

Deepfake 

Deepfake, or the alteration of an existing photo or video of a real person that shows them doing something that never happened, is another tactic used by social media comedians and fake news spreaders alike. In the case of the former, one company founded its entire business upon deepfake.3 The company is most famous for its deepfakes of Tom Cruise, though it has evolved into impersonating other celebrities, generative AI research, and translation.

When you see videos or images on social media that seem odd, look for a disclaimer – either on the post itself or in the poster’s bio – about whether the poster used deepfake technology to create the content. A responsible social media user will alert their audiences when the content they post is AI generated.  

Again, deepfake and other AI-altered images become malicious when they cause social media viewers to think or act a certain way. Fake news outlets may portray a political candidate doing something embarrassing to sway voters. Or an AI-altered image of animals in need may tug at the heartstrings of social media users and cause them to donate to a fake fundraiser. Deepfake challenges the saying “seeing is believing.” 

ChatGPT and Bot Accounts 

ChatGPT is everyone’s favorite creativity booster and taskmaster for any writing chore. It is also the new best friend of social media bot accounts. Present on just about every social media platform, bot accounts spread spam and fake news, and bolster follower numbers. Bot accounts used to be easy to spot because their posts were unoriginal and poorly written. Now, with the AI-assisted creativity and excellent sentence-level composition of ChatGPT, bot accounts sound a lot more realistic. And the humans managing those hundreds of bot accounts can create content far more quickly than if they were writing each post themselves.

In general, be wary when anyone you don’t know comments on one of your posts or reaches out to you via direct message. If someone says you’ve won a prize but you don’t remember ever entering a contest, ignore it. 

Take Every Post With a Grain of Salt 

With the advent of mainstream AI, everyone should approach every social media post with skepticism. Be on the lookout for anything that seems amiss or too fantastical to be true. And before you share a news item with your following, conduct your own background research to verify that it’s true.

To protect or restore your identity should you fall for any social media scams, you can trust McAfee+. McAfee+ monitors your identity and credit to help you catch suspicious activity early. Also, you can feel secure in the $1 million in identity theft coverage and identity restoration services. 

Social media is a fun way to pass the time, keep up with your friends, and learn something new. Don’t be afraid of AI on social media. Instead, laugh at the parodies, ignore and report the fake news, and enjoy social media confidently! 

1Business Insider, “AI-generated audio of Joe Biden and Donald Trump trash-talking while gaming is taking over TikTok”

2The Hill, “The impending nightmare that AI poses for media, elections”

3Metaphysic, “Create generative AI video that looks real”

The post Be Mindful of These 3 AI Tricks on World Social Media Day appeared first on McAfee Blog.

Artificial Imposters—Cybercriminals Turn to AI Voice Cloning for a New Breed of Scam

Three seconds of audio is all it takes.  

Cybercriminals have taken up newly forged artificial intelligence (AI) voice cloning tools and created a new breed of scam. With a small sample of audio, they can clone the voice of nearly anyone and send bogus messages by voicemail or voice messaging texts. 

The aim, most often, is to trick people out of hundreds, if not thousands, of dollars. 

The rise of AI voice cloning attacks  

Our recent global study found that out of 7,000 people surveyed, one in four said that they had experienced an AI voice cloning scam or knew someone who had. Further, our research team at McAfee Labs discovered just how easily cybercriminals can pull off these scams. 

With a small sample of a person’s voice and a script cooked up by a cybercriminal, these voice clone messages sound convincing. In fact, 70% of people in our worldwide survey said they weren’t confident they could tell the difference between a cloned voice and the real thing.

Cybercriminals create the kind of messages you might expect. Ones full of urgency and distress. They will use the cloning tool to impersonate a victim’s friend or family member with a voice message that says they’ve been in a car accident, or maybe that they’ve been robbed or injured. Either way, the bogus message often says they need money right away. 

In all, the approach has proven quite effective so far. One in ten people surveyed in our study said they received a message from an AI voice clone, and 77% of those victims said they lost money as a result.

The cost of AI voice cloning attacks  

Of the people who reported losing money, 36% said they lost between $500 and $3,000, while 7% got taken for sums anywhere between $5,000 and $15,000. 

Of course, a clone needs an original. Cybercriminals have no difficulty sourcing original voice files to create their clones. Our study found that 53% of adults said they share their voice data online or in recorded notes at least once a week, and 49% do so up to ten times a week. All this activity generates voice recordings that could be subject to hacking, theft, or sharing (whether accidental or malicious).

Consider that people post videos of themselves on YouTube, share reels on social media, and perhaps even participate in podcasts. Even by accessing relatively public sources, cybercriminals can stockpile their arsenals with powerful source material.

Nearly half (45%) of our survey respondents said they would reply to a voicemail or voice message purporting to be from a friend or loved one in need of money, particularly if they thought the request had come from their partner or spouse (40%), mother (24%), or child (20%).  

Further, they reported they’d likely respond to one of these messages if the message sender said: 

  • They’ve been in a car accident (48%). 
  • They’ve been robbed (47%). 
  • They’ve lost their phone or wallet (43%). 
  • They needed help while traveling abroad (41%). 

These messages are the latest examples of “spear phishing” attacks, which target specific people with specific information that seems just credible enough to act on. Cybercriminals will often source this information from public social media profiles and other places online where people post about themselves, their families, their travels, and so on—and then attempt to cash in.

Payment methods vary, yet cybercriminals often ask for forms that are difficult to trace or recover, such as gift cards, wire transfers, reloadable debit cards, and even cryptocurrency. As always, requests for these kinds of payments raise a major red flag. It could very well be a scam. 

AI voice cloning tools—freely available to cybercriminals 

In conjunction with this survey, researchers at McAfee Labs spent two weeks investigating the accessibility, ease of use, and efficacy of AI voice cloning tools. They readily found more than a dozen freely available on the internet.

These tools required only a basic level of experience and expertise to use. In one instance, just three seconds of audio was enough to produce a clone with an 85% voice match to the original (based on the benchmarking and assessment of McAfee security researchers). Further effort can increase the accuracy yet more. By training the data models, McAfee researchers achieved a 95% voice match based on just a small number of audio files.   
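The report doesn’t publish how its researchers scored a “voice match,” so as an illustration only, here is one common way to quantify voice similarity: compute speaker embeddings for the original and cloned audio and compare them with cosine similarity. This is a minimal sketch assuming the open-source resemblyzer library and placeholder file names; it is not McAfee’s benchmarking method.

```python
# Sketch: scoring how closely a cloned voice matches the original via
# speaker embeddings. Illustrative only -- not McAfee's methodology.
# Requires: pip install resemblyzer (file names below are placeholders).
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()  # pretrained speaker-embedding model

# Load and normalize the two recordings.
original = preprocess_wav("original_voice.wav")
cloned = preprocess_wav("cloned_voice.wav")

# Each utterance becomes a 256-dim unit vector characterizing the speaker.
emb_original = encoder.embed_utterance(original)
emb_cloned = encoder.embed_utterance(cloned)

# Embeddings are L2-normalized, so a dot product is cosine similarity.
similarity = float(np.dot(emb_original, emb_cloned))
print(f"Voice match: {similarity:.0%}")  # e.g., "Voice match: 85%"
```

A score near 1.0 means the two voices are nearly indistinguishable to the model; in practice, a researcher would average over many clips before quoting a figure like 85% or 95%.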

McAfee’s researchers also discovered that they could easily replicate accents from around the world, whether from the US, UK, India, or Australia. However, more distinctive voices were more challenging to copy, such as people who speak with an unusual pace, rhythm, or style. (Think of actor Christopher Walken.) Such voices require more effort to clone accurately, and people who have them are less likely to be cloned, at least given where the AI technology stands today, comedic impersonations aside.

The research team stated that this is yet one more way that AI has lowered the barrier to entry for cybercriminals. Whether that’s using it to create malware, write deceptive messages in romance scams, or now mount spear phishing attacks with voice cloning technology, it has never been easier to commit sophisticated-looking, and sounding, cybercrime.

Likewise, the study also found that the rise of deepfakes and other disinformation created with AI tools has made people more skeptical of what they see online. Now, 32% of adults said their trust in social media is less than it’s ever been before. 

Protect yourself from AI voice clone attacks 

  1. Set a verbal codeword with kids, family members, or trusted close friends. Make sure it’s one only you and those closest to you know. (Banks and alarm companies often set up accounts with a codeword in the same way to ensure that you’re really you when you speak with them.) Make sure everyone knows and uses it in messages when they ask for help. 
  2. Always question the source. In addition to voice cloning tools, cybercriminals have other tools that can spoof phone numbers so that they look legitimate. Even if it’s a voicemail or text from a number you recognize, stop, pause, and think. Does that really sound like the person you think it is? Hang up and call the person directly or try to verify the information before responding.  
  3. Think before you click and share. Who is in your social media network? How well do you really know and trust them? The wider your connections, the more risk you may be opening yourself up to when sharing content about yourself. Be thoughtful about the friends and connections you have online and set your profiles to “friends and families” only so your content isn’t available to the greater public. 
  4. Protect your identity. Identity monitoring services can notify you if your personal information makes its way to the dark web and provide guidance for protective measures. This can help shut down other ways that a scammer can attempt to pose as you. 
  5. Clear your name from data broker sites. How’d that scammer get your phone number anyway? It’s possible they pulled that information off a data broker site. Data brokers buy, collect, and sell detailed personal information, which they compile from several public and private sources, such as local, state, and federal records, in addition to third parties. Our Personal Data Cleanup service scans some of the riskiest data broker sites and shows you which ones are selling your personal info. 

Get the full story

A lot can come from a three-second audio clip.

With the advent of AI-driven voice cloning tools, cybercriminals have created a new form of scam. With arguably stunning accuracy, these tools can let cybercriminals impersonate nearly anyone. All they need is a short audio clip to kick off the cloning process.

Yet like all scams, there are ways you can protect yourself. A sharp sense of what seems right and wrong, along with a few straightforward security steps, can help keep you and your loved ones from falling for these AI voice clone scams.

For a closer look at the survey data, along with a nation-by-nation breakdown, download a copy of our report here. 

Survey methodology 

The survey was conducted between January 27 and February 1, 2023, by market research company MSI-ACI, with people aged 18 years and older invited to complete an online questionnaire. In total, 7,000 people completed the survey across nine countries: the United States, United Kingdom, France, Germany, Australia, India, Japan, Brazil, and Mexico.

The post Artificial Imposters—Cybercriminals Turn to AI Voice Cloning for a New Breed of Scam appeared first on McAfee Blog.

How to Spot Fake Art and Deepfakes

Artificial intelligence (AI) is making its way from high-tech labs and Hollywood plots into the hands of the general population. ChatGPT, the text generation tool, hardly needs an introduction, and AI art generators (like Midjourney and DALL-E) are hot on its heels in popularity. Inputting nonsensical prompts and receiving ridiculous art in return is a fun way to spend an afternoon.

However, while you’re using AI art generators for a laugh, cybercriminals are using the technology to trick people into believing sensationalist fake news, catfish dating profiles, and damaging impersonations. Sophisticated AI-generated art can be difficult to spot, but here are a few signs that you may be viewing a dubious image or engaging with a criminal behind an AI-generated profile. 

What Are AI Art Generators and Deepfakes? 

To better understand the cyberthreats posed by each, here are some quick definitions: 

  • AI art generators. Generative AI is typically the specific type of AI behind art generators. This type of AI is loaded with billions of examples of art. When someone gives it a prompt, the AI flips through its vast library and selects a combination of artworks it thinks will best fulfill the prompt. AI art is a hot topic of debate in the art world because none of the works it creates are technically original. It derives its final product from various artists, the majority of whom haven’t granted the computer program permission to use their creations. 
  • Deepfake. A deepfake is a manipulation of existing photos and videos of real people. The resulting manipulation either makes an entirely new person out of a compilation of real people, or the original subject is manipulated to look like they’re doing something they never did. 

AI art and deepfake aren’t technologies found on the dark web. Anyone can download an AI art or deepfake app, such as FaceStealer and Fleeceware. Because the technology isn’t illegal and it has many innocent uses, it’s difficult to regulate. 

How Do People Use AI Art Maliciously? 

It’s perfectly innocent to use AI art to create a cover photo for your social media profile or to pair it with a blog post. However, it’s best to be transparent with your audience and include a disclaimer or caption saying that it’s not original artwork. AI art turns malicious when people use images to intentionally trick others and gain financially from the trickery. 

Catfish may use deepfake profile pictures and videos to convince their targets that they’re genuinely looking for love. Revealing their real face and identity could put a criminal catfish at risk of discovery, so they either use someone else’s pictures or deepfake an entire library of pictures. 

Fake news propagators may also enlist the help of AI art or a deepfake to add “credibility” to their conspiracy theories. When they pair their sensationalist headlines with a photo that, at a quick glance, proves its legitimacy, people may be more likely to share and spread the story. Fake news is damaging to society because of the extreme negative emotions it can generate in huge crowds. The resulting hysteria or outrage can lead to violence in some cases.

Finally, some criminals may use deepfake to trick face ID and gain entry to sensitive online accounts. To prevent someone from deepfaking their way into your accounts, protect your accounts with multifactor authentication. That means that more than one method of identification is necessary to open the account. These methods can be one-time codes sent to your cellphone, passwords, answers to security questions, or fingerprint ID in addition to face ID.
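To make the “more than one method” idea concrete, here is a minimal sketch of a time-based one-time password (TOTP), the rotating six-digit code behind most authenticator apps. It assumes the open-source pyotp library and illustrates the general mechanism, not any specific service’s implementation.

```python
# Sketch: a TOTP second factor (RFC 6238), the rotating code used by
# authenticator apps. Illustrative only; requires: pip install pyotp.
# A deepfaked face can't reproduce this code, because it is derived
# from a shared secret plus the current time.
import pyotp

# The secret is provisioned once at enrollment (often via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)  # 6 digits, new code every 30 seconds

print("Current code:", totp.now())

# Server side: check the code the user typed. valid_window=1 tolerates
# a little clock drift between the user's device and the server.
user_code = totp.now()  # stand-in for what the user would enter
print("Verified:", totp.verify(user_code, valid_window=1))
```

Because the code changes every 30 seconds and never travels with the face scan, a stolen or spoofed biometric alone isn’t enough to open the account.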

3 Ways to Spot Fake Images 

Before you start an online relationship or share an apparent news story on social media, scrutinize images using these three tips to pick out malicious AI-generated art and deepfake. 

1. Inspect the context around the image.

Fake images usually don’t appear by themselves. There’s often text or a larger article around them. Inspect the text for typos, poor grammar, and overall poor composition. Phishers are notorious for their poor writing skills. AI-generated text is more difficult to detect because its grammar and spelling are often correct; however, the sentences may seem choppy. 

2. Evaluate the claim.

Does the image seem too bizarre to be real? Too good to be true? Extend this generation’s rule of thumb of “Don’t believe everything you read on the internet” to include “Don’t believe everything you see on the internet.” If a fake news story is claiming to be real, search for the headline elsewhere. If it’s truly noteworthy, at least one other site will report on the event. 

3. Check for distortions.

AI technology often generates a finger or two too many on hands, and a deepfake creates eyes that may have a soulless or dead look to them. Also, there may be shadows in places where they wouldn’t be natural, and the skin tone may look uneven. In deepfaked videos, the voice and facial expressions may not exactly line up, making the subject look robotic and stiff. 
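These eyeball checks can also be automated as a rough first pass. The sketch below uses error-level analysis (ELA), a general image-forensics heuristic that the article itself doesn’t mention: re-save the image as a JPEG and amplify the pixel-wise difference, since regions edited after the original compression often recompress differently. It assumes the Pillow library and a placeholder file name, and it flags candidates for human review rather than proving manipulation.

```python
# Sketch: error-level analysis (ELA), a quick image-forensics heuristic
# for spotting edited regions. Not a deepfake detector per se -- pasted
# or generated regions often recompress differently, but clean images
# can trigger false positives too. Requires: pip install Pillow
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Re-save at a known JPEG quality, then reload.
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")

    # Pixel-wise difference: edited areas tend to differ more.
    diff = ImageChops.difference(original, resaved)

    # Scale brightness so the differences are visible to the eye.
    extrema = diff.getextrema()  # per-channel (min, max) tuples
    max_diff = max(hi for _, hi in extrema) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

error_level_analysis("suspect_image.jpg").save("ela_map.png")
# Bright, blocky regions in ela_map.png deserve a closer manual look.
```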

Boost Your Online Safety With McAfee 

Fake images are tough to spot, and they’ll likely get more realistic as the technology improves. Awareness of emerging AI threats better prepares you to take control of your online life. There are quizzes online that compare deepfakes and AI art with genuine people and human-made artworks. When you have a spare ten minutes, consider taking a quiz and reviewing your mistakes so you can spot malicious fake art in the future.

To give you more confidence in the security of your online life, partner with McAfee. McAfee+ Ultimate is the all-in-one privacy, identity, and device security service. Protect up to six members of your family with the family plan, and receive up to $2 million in identity theft coverage. Partner with McAfee to stop any threats that sneak under your watchful eye. 

The post How to Spot Fake Art and Deepfakes appeared first on McAfee Blog.
