Artificial intelligence (AI) is making its way from high-tech labs and Hollywood plots into the hands of the general population. ChatGPT, the text generation tool, hardly needs an introduction, and AI art generators (like Midjourney and DALL-E) are hot on its heels in popularity. Feeding them nonsensical prompts and receiving ridiculous images in return is a fun way to spend an afternoon.
However, while you’re using AI art generators for a laugh, cybercriminals are using the technology to trick people into believing sensationalist fake news, catfish dating profiles, and damaging impersonations. Sophisticated AI-generated art can be difficult to spot, but here are a few signs that you may be viewing a dubious image or engaging with a criminal behind an AI-generated profile.
To better understand the cyberthreats each poses, here are some quick definitions: AI art refers to images created by machine learning models from text prompts, while a deepfake is a synthetic photo, video, or audio clip that uses deep learning to swap in or convincingly mimic a real person’s face or voice.
AI art and deepfakes aren’t technologies confined to the dark web. Anyone can download an AI art or deepfake app, such as FaceStealer and Fleeceware. Because the technology isn’t illegal and has many innocent uses, it’s difficult to regulate.
How Do People Use AI Art Maliciously?
It’s perfectly innocent to use AI art to create a cover photo for your social media profile or to pair it with a blog post. However, it’s best to be transparent with your audience and include a disclaimer or caption saying that it’s not original artwork. AI art turns malicious when people use images to intentionally trick others and gain financially from the trickery.
Catfish may use deepfake profile pictures and videos to convince their targets that they’re genuinely looking for love. Revealing their real face and identity could put a criminal catfish at risk of discovery, so they either use someone else’s pictures or deepfake an entire library of pictures.
Fake news propagators may also enlist the help of AI art or a deepfake to add “credibility” to their conspiracy theories. When they pair a sensationalist headline with a photo that, at a quick glance, appears to prove its legitimacy, people may be more likely to share and spread the story. Fake news is damaging to society because of the extreme negative emotions it can generate in huge crowds. The resulting hysteria or outrage can lead to violence in some cases.
Finally, some criminals may use deepfakes to trick face ID and gain entry to sensitive online accounts. To prevent someone from deepfaking their way into your accounts, protect them with multifactor authentication, which requires more than one method of identification to open the account. Those methods can include one-time codes sent to your cellphone, passwords, answers to security questions, or a fingerprint scan in addition to face ID.
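The advice above is about switching these extra factors on, not building them, but if you’re curious how the one-time codes from an authenticator app are generated, here is a minimal, purely illustrative Python sketch of the standard TOTP scheme (RFC 6238); the example secret and six-digit code length are assumptions for demonstration, not a description of any particular service.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password.

    The shared secret is provisioned when you enrol a device in MFA;
    the value used below is a made-up example, not a real credential.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step                  # 30-second time window
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()  # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                          # dynamic truncation
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    print("Current code:", totp("JBSWY3DPEHPK3PXP"))   # example secret only
```

Pairing a code like this with your password means that even a criminal who deepfakes their way past facial recognition still lacks the second factor needed to open the account.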
Before you start an online relationship or share an apparent news story on social media, scrutinize images using these three tips to pick out malicious AI-generated art and deepfakes.
1. Inspect the Surrounding Text
Fake images usually don’t appear by themselves. There’s often text or a larger article around them. Inspect the text for typos, poor grammar, and overall poor composition. Phishers are notorious for their poor writing skills. AI-generated text is more difficult to detect because its grammar and spelling are often correct; however, the sentences may seem choppy.
2. Question Whether It’s Too Bizarre to Be True
Does the image seem too bizarre to be real? Too good to be true? Extend this generation’s rule of thumb of “Don’t believe everything you read on the internet” to include “Don’t believe everything you see on the internet.” If a sensational story claims to be real news, search for the headline elsewhere. If it’s truly noteworthy, at least one other site will report on the event.
3. Look for Telltale Visual Glitches
AI technology often generates a finger or two too many on hands, and a deepfake creates eyes that may have a soulless or dead look to them. Also, there may be shadows in places where they wouldn’t be natural, and the skin tone may look uneven. In deepfaked videos, the voice and facial expressions may not exactly line up, making the subject look robotic and stiff.
Fake images are tough to spot, and they’ll likely become more realistic as the technology improves. Awareness of emerging AI threats better prepares you to take control of your online life. There are quizzes online that compare deepfakes and AI art with genuine people and artwork created by humans. When you have a spare ten minutes, consider taking one and learning from your mistakes so you can better identify malicious fake art in the future.
To give you more confidence in the security of your online life, partner with McAfee. McAfee+ Ultimate is the all-in-one privacy, identity, and device security service. Protect up to six members of your family with the family plan, and receive up to $2 million in identity theft coverage. Partner with McAfee to stop any threats that sneak under your watchful eye.
The post How to Spot Fake Art and Deepfakes appeared first on McAfee Blog.
The COVID-19 pandemic, along with social distancing, has done many things to alter our lives. But in one respect it has merely accelerated a process begun many years ago: we were all spending more and more time online before the virus struck. Now that we’re forced to work, study, and socialize at home, the online digital world has become absolutely essential to our communications, and video conferencing apps have become our “face-to-face” window on the world.
The problem is that as users flock to these services, the bad guys are also lying in wait — to disrupt or eavesdrop on our chats, spread malware, and steal our data. Zoom’s problems have perhaps been the most widely publicized, because of its quickly rising popularity, but it’s not the only platform whose users have been potentially at risk. Cisco’s WebEx and Microsoft Teams have also had issues; while other platforms, such as Houseparty, are intrinsically less secure (almost by design for their target audience, as the name suggests).
Let’s take a look at some of the key threats out there and how you can stay safe while video conferencing.
Depending on the platform (designed for work or play) and the use case (business or personal), there are various opportunities for the online attacker to join and disrupt or eavesdrop on video conferencing calls. The latter is especially dangerous if you’re discussing sensitive business information.
Malicious hackers may also look to deliver malware via chats or shared files to take control of your computer, or to steal your passwords and sensitive personal and financial information. In a business context, they could even try to hijack your video conferencing account to impersonate you, in a bid to steal info from or defraud your colleagues or company.
The bad guys may also be able to take advantage of the fact that your home PCs and devices are less well-secured than those at work or school—and that you may be more distracted at home and less alert to potential threats.
To accomplish their goals, malicious hackers can leverage various techniques, including harvesting or guessing meeting IDs and links, sending phishing emails spoofed to look like they come from video conferencing providers, distributing fake installers laced with malware, and exploiting unpatched vulnerabilities in the conferencing software itself.
Zoom has in many ways become the victim of its own success. With daily meeting participants soaring from 10 million in December last year to 200 million by March 2020, all eyes have been focused on the platform. Unfortunately, that also includes hackers. Zoom has been hit by a number of security and privacy issues over the past several months, which include “Zoombombing” (meetings disrupted by uninvited guests), misleading encryption claims, a waiting room vulnerability, credential theft and data collection leaks, and fake Zoom installers. To be fair to Zoom, it has responded quickly to these issues, realigning its development priorities to fix the security and privacy issues discovered by its intensive use.
And Zoom isn’t alone. Earlier in the year, Cisco Systems had its own problem with WebEx, its widely-used enterprise video conferencing system, when it discovered a flaw in the platform that could allow a remote, unauthenticated attacker to enter a password-protected video conferencing meeting. All an attacker needed was the meeting ID and a WebEx mobile app for iOS or Android, and they could have barged in on a meeting, no authentication necessary. Cisco quickly moved to fix the high-severity vulnerability, but other flaws (also now fixed) have cropped up in WebEx’s history, including one that could enable a remote attacker to send a forged request to the system’s server.
More recently, Microsoft Teams joined the ranks of leading business video conferencing platforms with potentially serious vulnerabilities. On April 27 it surfaced that for at least three weeks (from the end of February until the middle of March), a malicious GIF could have stolen user data from Teams accounts, possibly across an entire company. The vulnerability was patched on April 20, but it’s a reminder that even leading systems such as Zoom, WebEx, and Teams aren’t foolproof and require periodic vulnerability and security fixes to keep them safe and secure. The risk is compounded during the COVID-19 pandemic, when employees are working from home and connecting to their company’s network and systems via possibly unsecured home networks and devices.
So how do you choose the best, most secure, video conferencing software for your work-at-home needs? There are many solutions on the market today. In fact, the choice can be dizzying. Some simply enable video or audio meetings/calls, while others also allow for sharing and saving of documents and notes. Some are only appropriate for one-on-one connections or small groups, while others can scale to thousands.
In short, you’ll need to choose the video conferencing solution most appropriate to your needs, while checking if it meets a minimum set of security standards for working at home. This set of criteria should include end-to-end encryption, automatic and frequent security updates, the use of auto-generated meeting IDs and strong access controls, a program for managing vulnerabilities, and last but not least, good privacy practices by the company.
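To make the “auto-generated meeting IDs and strong access controls” criterion more concrete, here is a small, purely hypothetical Python sketch (using the standard library’s secrets module) of how a conferencing service might mint a random meeting ID and passcode instead of reusing a personal meeting room; the function name, ID length, and passcode length are illustrative assumptions, not any vendor’s actual implementation.

```python
import secrets
import string

def generate_meeting_credentials(id_digits: int = 11, passcode_length: int = 10):
    """Illustrative only: mint a one-off numeric meeting ID and an
    alphanumeric passcode using a cryptographically secure RNG."""
    meeting_id = "".join(secrets.choice(string.digits) for _ in range(id_digits))
    alphabet = string.ascii_letters + string.digits
    passcode = "".join(secrets.choice(alphabet) for _ in range(passcode_length))
    return meeting_id, passcode

if __name__ == "__main__":
    meeting_id, passcode = generate_meeting_credentials()
    print(f"Meeting ID: {meeting_id}  Passcode: {passcode}")
```

A single-use, randomly generated ID and passcode like this is far harder to guess or reuse than a static personal meeting link, which is exactly why auto-generated IDs and strong access controls appear on the checklist above.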
There are plenty of other video conferencing options alongside Zoom, WebEx, and Teams, ranging from simple consumer apps to enterprise-grade platforms; whichever you consider, measure it against the security criteria above.
Whatever video conferencing platform you use, it’s important to bear in mind that cyber-criminals will always be looking to take advantage of any security gaps they can find, in the tool itself or in your use of it. Some general best practices apply across platforms, though a few controls are Zoom-specific and, depending on the use case, you might choose not to enable every option:
- Protect meetings with a passcode and use auto-generated meeting IDs rather than a personal meeting room.
- Share meeting links only with invited participants and avoid posting them publicly.
- Use the waiting room, and lock the meeting once everyone has joined, so uninvited guests can’t barge in.
- Restrict screen sharing and file transfers to the host or to trusted participants.
- Download the client only from the vendor’s official website or app store to avoid fake installers.
- Keep the app and your operating system updated so known vulnerabilities are patched promptly.
Fortunately, Trend Micro has a range of capabilities that can support your efforts to stay safe while using video conferencing services.
Trend Micro Home Network Security (HNS) protects every device in your home connected to the internet. That means it will protect you from malicious links and attachments in phishing emails spoofed to appear as if sent from video conferencing firms, as well as from those sent by hackers that may have covertly entered a meeting. Its Vulnerability Check can identify any vulnerabilities in your home devices and PCs, including work laptops, and its Remote Access Protection can reduce the risk of tech support scams and unwanted remote connections to your device. Finally, it allows parents to control their kids’ usage of video conferencing applications, to limit their exposure.
Trend Micro Security also offers protection against email, file, and web threats on your devices. Note, too, that Password Manager is automatically installed with Maximum Security to help users create unique, strong passwords for each application and website they use, including video conferencing sites.
Finally, Trend Micro WiFi Protection (multi-platform) and VPN Proxy One (Mac and iOS) offer VPN connections from your home to the internet, creating secure encrypted tunnels for your traffic to flow through. The VPN apps work on both Wi-Fi and Ethernet connections. This could be useful for users concerned that their video conferencing app isn’t end-to-end encrypted, or for those wishing to protect their identity and personal information when interacting on these apps.
The post From Bugs to Zoombombing: How to Stay Safe in Online Meetings appeared first on .