How to Protect School Children From Deep Fakes

Deep fakes are a growing concern in the age of digital media and can be extremely dangerous for school children. Deep fakes are digital images, videos, or audio recordings that have been manipulated to look or sound like someone else. They can be used to spread misinformation, facilitate harassment, and even lead to identity theft. With the prevalence of digital media, it’s important to protect school children from deep fakes.

Here are some tips to help protect school children from deep fakes:  

1. Educate students on deep fakes.

Educating students on deep fakes is an essential step in protecting them from the dangers of these digital manipulations. Schools should provide students with information about the different types of deep fakes and how to spot them.  

2. Encourage students to be media literate.

Media literacy is an important skill that students should have in order to identify deep fakes and other forms of misinformation. Schools should provide students with resources to help them understand how to evaluate the accuracy of a digital image or video.  

3. Promote digital safety. 

Schools should emphasize the importance of digital safety and provide students with resources on how to protect their online identities. This includes teaching students about the risks of sharing personal information online, using strong passwords, and being aware of phishing scams.  

4. Monitor online activity. 

Schools should monitor online activity to ensure that students are not exposed to deep fakes or other forms of online harassment. Schools should have policies in place to protect students from online bullying and harassment, and they should take appropriate action if they find any suspicious activity.  

By following these tips, schools can help protect their students from the dangers of deep fakes. Educating students on deep fakes, encouraging them to be media literate, promoting digital safety, and monitoring online activity are all important steps to ensure that school children are safe online. 

By equipping students with the tools they need to navigate the online world, schools can also help them learn how to use digital technology responsibly. Through educational resources and programs, schools can teach students the importance of digital citizenship and how to use digital technology ethically and safely. Finally, schools should promote collaboration and communication between parents, students, and school administration to ensure everyone is aware of the risks of deep fakes and other forms of online deception.

Deep fakes have the potential to lead to identity theft, particularly if deep fakes tools are used to steal the identities of students or even teachers. McAfee’s Identity Monitoring Service, as part of McAfee+, monitors the dark web for your personal info, including email, government IDs, credit card and bank account info, and more. We’ll help keep your personal info safe, with early alerts if your data is found on the dark web, so you can take action to secure your accounts before they’re used for identity theft. 


The post How to Protect School Children From Deep Fakes appeared first on McAfee Blog.

This Election Season, Be on the Lookout for AI-generated Fake News

It’s that time of year again: election season! You already know what to expect when you flip on the TV. Get ready for a barrage of commercials, each candidate saying just enough to get you to like them but nothing specific enough to be held to should they win.

What you might not expect is for sensationalist election “news” to barge in uninvited on your screens. Fake news – or exaggerated or completely falsified articles claiming to be unbiased and factual journalism, often spread via social media – can pop up anytime and anywhere. This election season’s fake news machine will be different than previous years because of the emergence of mainstream artificial intelligence tools. 

AI’s Role in Fake News Generation 

Here are a few ways desperate zealots may use various AI tools to stir unease and spread misinformation around the upcoming election. 

Deepfake 

We’ve had time to learn and operate by the adage of “Don’t believe everything you read on the internet.” But now, thanks to deepfake, that lesson must extend to “Don’t believe everything you SEE on the internet.” Deepfake is the digital manipulation of a video or photo. The result often depicts a scene that never happened. At a quick glance, deepfakes can look very real! Some still look real after studying them for a few minutes. 

People may use deepfake to paint a candidate in a bad light or to spread sensationalized false news reports. For example, a deepfake could make it look like a candidate flashed a rude hand gesture or show a candidate partying with controversial public figures.  

AI Voice Synthesizers 

According to McAfee’s Beware the Artificial Imposter report, it only takes three seconds of authentic audio and minimal effort to create a mimicked voice with 85% accuracy. When someone puts their mind to it and takes the time to hone the voice clone, they can achieve a 95% voice match to the real deal. 

Well-known politicians have thousands of seconds’ worth of audio clips available to anyone on the internet, giving voice cloners plenty of samples to choose from. Fake news spreaders could employ AI voice generators to add an authentic-sounding talk track to a deepfake video or to fabricate a snappy and sleazy “hot mike” clip to share far and wide online. 

AI Text Generators 

Programs like ChatGPT and Bard can make anyone sound intelligent and eloquent. In the hands of rabble-rousers, AI text generation tools can create articles that sound almost professional enough to be real. Plus, AI allows people to churn out content quickly, meaning that people could spread dozens of fake news reports daily. The number of fake articles is limited only by the small amount of imagination needed to write a short prompt.

How to Spot AI-assisted Fake News

Before you get tricked by a fake news report, here are some ways to spot a malicious use of AI intended to mislead your political leanings: 

  • Distorted images. Fabricated images and videos aren’t perfect. If you look closely, you can often spot the difference between real and fake. For example, AI-created art often adds extra fingers or creates faces that look blurry.  
  • Robotic voices. When someone claims an audio clip is legitimate, listen closely to the voice as it could be AI-generated. AI voice synthesizers give themselves away not when you listen to the recording as a whole, but when you break it down syllable by syllable. A lot of editing is usually involved in fine-tuning a voice clone. AI voices often make awkward pauses, clip words short, or put unnatural emphasis in the wrong places. Remember, most politicians are expert public speakers, so genuine speeches are likely to sound professional and rehearsed.
  • Strong emotions. No doubt about it, politics touch some sensitive nerves; however, if you see a post or “news report” that makes you incredibly angry or very sad, step away. Similar to phishing emails that urge readers to act without thinking, fake news reports stir up a frenzy – manipulating your emotions instead of using facts – to sway your way of thinking. 

Share Responsibly and Question Everything  

Is what you’re reading or seeing or hearing too bizarre to be true? That means it probably isn’t. If you’re interested in learning more about a political topic you came across on social media, do a quick search to corroborate a story. Have a list of respected news establishments bookmarked to make it quick and easy to ensure the authenticity of a report. 

If you encounter fake news, the best way you can interact with it is to ignore it. Or, in cases where the content is offensive or incendiary, you should report it. Even if the fake news is laughably off-base, it’s still best not to share it with your network, because that’s exactly what the original poster wants: For as many people as possible to see their fabricated stories. All it takes is for someone within your network to look at it too quickly, believe it, and then perpetuate the lies. 

It’s great if you’re passionate about politics and the various issues on the ballot. Passion is a powerful driver of change. But this election season, try to focus on what unites us, not what divides us. 

The post This Election Season, Be on the Lookout for AI-generated Fake News appeared first on McAfee Blog.

AI in the Wild: Malicious Applications of Mainstream AI Tools

By: McAfee

It’s not all funny limericks, bizarre portraits, and hilarious viral skits. ChatGPT, Bard, DALL-E, Craiyon, Voice.ai, and a whole host of other mainstream artificial intelligence tools are great for whiling away an afternoon or helping you with your latest school or work assignment; however, cybercriminals are bending AI tools like these to aid in their schemes, adding a whole new dimension to phishing, vishing, malware, and social engineering.  

Here are some recent reports of AI’s use in scams plus a few pointers that might tip you off should any of these happen to you. 

1. AI Voice Scams

Vishing – or phishing over the phone – is not a new scheme; however, AI voice mimickers are making these scamming phone calls more believable than ever. In Arizona, a fake kidnapping phone call caused several minutes of panic for one family, as a mother received a demand for ransom to release her alleged kidnapped daughter. On the phone, the mother heard a voice that sounded exactly like her child’s, but it turned out to be an AI-generated facsimile.    

In reality, the daughter was not kidnapped. She was safe and sound. The family didn’t lose any money because they did the right thing: They contacted law enforcement and kept the scammer on the phone while they located the daughter.1 

Imposter scams accounted for a loss of $2.6 billion in the U.S. in 2022. Emerging AI scams could increase that staggering total. Globally, about 25% of people have either experienced an AI voice scam or know someone who has, according to McAfee’s Beware the Artificial Imposter report. Additionally, the study discovered that 77% of voice scam targets lost money as a result.  

How to hear the difference 

No doubt about it, it’s frightening to hear a loved one in distress, but try to stay as calm as possible if you receive a phone call claiming to be someone in trouble. Do your best to really listen to the “voice” of your loved one. AI voice technology is incredible, but there are still some kinks in the technology. For example, does the voice have unnatural hitches? Do words cut off just a little too early? Does the tone of certain words not quite match your loved one’s accent? To pick up on these small details, a level head is necessary. 

What you can do as a family today to avoid falling for an AI vishing scam is to agree on a family password. This can be an obscure word or phrase that is meaningful to you. Keep this password to yourselves and never post about it on social media. This way, if a scammer ever calls you claiming to have or be a family member, asking for this password can help you tell a fake emergency from a real one.

2. Deepfake Ransom and Fake Advertisements

Deepfake, or the digital manipulation of an authentic image, video, or audio clip, is an AI capability that unsettles a lot of people. It challenges the long-held axiom that “seeing is believing.” If you can’t quite believe what you see, then what’s real? What’s not? 

The FBI is warning the public against a new scheme where cybercriminals are editing explicit footage and then blackmailing innocent people into sending money or gift cards in exchange for not posting the compromising content.2 

Deepfake technology was also at the center of an incident involving a fake ad. A scammer created a fake ad depicting Martin Lewis, a trusted finance expert, advocating for an investment venture. The Facebook ad attempted to add legitimacy to its nefarious endeavor by including the deepfaked Lewis.3  

How to respond to ransom demands and questionable online ads 

No response is the best response to a ransom demand. You’re dealing with a criminal. Who’s to say they won’t release the fabricated content even if you give in to the ransom? Involve law enforcement as soon as a scammer approaches you, and they can help you resolve the issue.

Just because a reputable social media platform hosts an advertisement doesn’t mean that the advertiser is a legitimate business. Before buying anything or investing your money with a business you found through an advertisement, conduct your own background research on the company. All it takes is five minutes to look up its Better Business Bureau rating and other online reviews to determine if the company is reputable. 

To identify a deepfake video or image, check for inconsistent shadows and lighting, face distortions, and people’s hands. That’s where you’ll most likely spot small details that aren’t quite right. Like AI voices, deepfake technology is often convincing, but it’s not perfect.

3. AI-generated Malware and Phishing Emails

Content generation tools have some safeguards in place to prevent them from creating text that could be used illegally; however, some cybercriminals have found ways around those rules and are using ChatGPT and Bard to assist in their malware and phishing operations. For example, if a criminal asked ChatGPT to write keylogging malware, it would refuse. But if they rephrased and asked it to compose code that captures keystrokes, it may comply with that request. One researcher demonstrated that even someone with little knowledge of coding could use ChatGPT, thus making malware creation simpler and more accessible than ever.4 Similarly, AI text generation tools can create convincing phishing emails and create them quickly. In theory, this could speed up a phisher’s operation and widen their reach.

How to avoid AI-written malware and phishing attempts 

You can avoid AI-generated malware and phishing correspondences the same way you deal with the human-written variety: Be careful and distrust anything that seems suspicious. To steer clear of malware, stick to websites you know you can trust. A safe browsing tool like McAfee web protection – which is included in McAfee+ – can double-check that you stay off of sketchy websites.

As for phishing, when you see emails or texts that demand a quick response or seem out of the ordinary, be on alert. Traditional phishing correspondences are usually riddled with typos, misspellings, and poor grammar. AI-written lures are often written well and rarely contain errors. This means that you must be diligent in vetting every message in your inbox. 

Slow Down, Keep Calm, and Be Confident 

While the debate about regulating AI heats up, the best thing you can do is to use AI responsibly. Be transparent when you use it. And if you suspect you’re encountering a malicious use of AI, slow down and try your best to evaluate the situation with a clear mind. AI can create some convincing content, but trust your instincts and follow the above best practices to keep your money and personal information out of the hands of cybercriminals. 

1CNN, “‘Mom, these bad men have me’: She believes scammers cloned her daughter’s voice in a fake kidnapping”

2NBC News, “FBI warns about deepfake porn scams”

3BBC, “Martin Lewis felt ‘sick’ seeing deepfake scam ad on Facebook”

4Dark Reading, “Researcher Tricks ChatGPT Into Building Undetectable Steganography Malware”

The post AI in the Wild: Malicious Applications of Mainstream AI Tools appeared first on McAfee Blog.

How to Spot Fake Art and Deepfakes

Artificial intelligence (AI) is making its way from high-tech labs and Hollywood plots into the hands of the general population. ChatGPT, the text generation tool, hardly needs an introduction, and AI art generators (like Midjourney and DALL-E) are hot on its heels in popularity. Inputting nonsensical prompts and receiving ridiculous art clips in return is a fun way to spend an afternoon.

However, while you’re using AI art generators for a laugh, cybercriminals are using the technology to trick people into believing sensationalist fake news, catfish dating profiles, and damaging impersonations. Sophisticated AI-generated art can be difficult to spot, but here are a few signs that you may be viewing a dubious image or engaging with a criminal behind an AI-generated profile. 

What Are AI Art Generators and Deepfakes? 

To better understand the cyberthreats posed by each, here are some quick definitions: 

  • AI art generators. Generative AI is typically the specific type of AI behind art generators. This type of AI is loaded with billions of examples of art. When someone gives it a prompt, the AI flips through its vast library and selects a combination of artworks it thinks will best fulfill the prompt. AI art is a hot topic of debate in the art world because none of the works it creates are technically original. It derives its final product from various artists, the majority of whom haven’t granted the computer program permission to use their creations. 
  • Deepfake. A deepfake is a manipulation of existing photos and videos of real people. The resulting manipulation either makes an entirely new person out of a compilation of real people, or the original subject is manipulated to look like they’re doing something they never did. 

AI art and deepfake aren’t technologies found on the dark web. Anyone can download an AI art or deepfake app, such as FaceStealer and Fleeceware. Because the technology isn’t illegal and it has many innocent uses, it’s difficult to regulate. 

How Do People Use AI Art Maliciously? 

It’s perfectly innocent to use AI art to create a cover photo for your social media profile or to pair it with a blog post. However, it’s best to be transparent with your audience and include a disclaimer or caption saying that it’s not original artwork. AI art turns malicious when people use images to intentionally trick others and gain financially from the trickery. 

Catfish may use deepfake profile pictures and videos to convince their targets that they’re genuinely looking for love. Revealing their real face and identity could put a criminal catfish at risk of discovery, so they either use someone else’s pictures or deepfake an entire library of pictures. 

Fake news propagators may also enlist the help of AI art or a deepfake to add “credibility” to their conspiracy theories. When they pair their sensationalist headlines with a photo that, at quick glance, proves its legitimacy, people may be more likely to share and spread the story. Fake news is damaging to society because of the extreme negative emotions it can generate in huge crowds. The resulting hysteria or outrage can lead to violence in some cases.

Finally, some criminals may use deepfake to trick face ID and gain entry to sensitive online accounts. To prevent someone from deepfaking their way into your accounts, protect your accounts with multifactor authentication. That means that more than one method of identification is necessary to open the account. These methods can be one-time codes sent to your cellphone, passwords, answers to security questions, or fingerprint ID in addition to face ID.

3 Ways to Spot Fake Images 

Before you start an online relationship or share an apparent news story on social media, scrutinize images using these three tips to pick out malicious AI-generated art and deepfake. 

1. Inspect the context around the image.

Fake images usually don’t appear by themselves. There’s often text or a larger article around them. Inspect the text for typos, poor grammar, and overall poor composition. Phishers are notorious for their poor writing skills. AI-generated text is more difficult to detect because its grammar and spelling are often correct; however, the sentences may seem choppy. 

2. Evaluate the claim.

Does the image seem too bizarre to be real? Too good to be true? Extend this generation’s rule of thumb of “Don’t believe everything you read on the internet” to include “Don’t believe everything you see on the internet.” If a fake news story is claiming to be real, search for the headline elsewhere. If it’s truly noteworthy, at least one other site will report on the event. 

3. Check for distortions.

AI technology often generates a finger or two too many on hands, and a deepfake creates eyes that may have a soulless or dead look to them. Also, there may be shadows in places where they wouldn’t be natural, and the skin tone may look uneven. In deepfaked videos, the voice and facial expressions may not exactly line up, making the subject look robotic and stiff. 

Boost Your Online Safety With McAfee 

Fake images are tough to spot, and they’ll likely get more realistic as the technology improves. Awareness of emerging AI threats better prepares you to take control of your online life. There are quizzes online that compare deepfake and AI art with genuine people and artworks created by humans. When you have a spare ten minutes, consider taking one and learning from your mistakes so you can identify malicious fake art in the future.

To give you more confidence in the security of your online life, partner with McAfee. McAfee+ Ultimate is the all-in-one privacy, identity, and device security service. Protect up to six members of your family with the family plan, and receive up to $2 million in identity theft coverage. Partner with McAfee to stop any threats that sneak under your watchful eye. 

The post How to Spot Fake Art and Deepfakes appeared first on McAfee Blog.
