
How to Protect Your Privacy From Generative AI

With the rise of artificial intelligence (AI) and machine learning, concerns about the privacy of personal data have reached an all-time high. Generative AI is a type of AI that can generate new data from existing data, such as images, videos, and text. This technology can be used for a variety of purposes, from generating realistic images and voices to creating “deepfakes” and manipulating public opinion. As a result, it’s important to be aware of the potential risks that generative AI poses to your privacy.

In this blog post, we’ll discuss how to protect your privacy from generative AI. 

1. Understand what generative AI is and how it works.

Generative AI is a type of AI that uses existing data to generate new data. It’s commonly used for things like image and video generation, text generation, and voice synthesis. This technology can be used for both good and bad purposes, so it’s important to understand how it works and the potential risks it poses to your privacy.

2. Be aware of the potential risks.

Generative AI can be used to create deepfakes, which are fake images or videos that are generated using existing data. This technology can be used for malicious purposes, such as manipulating public opinion, identity theft, and spreading false information. It’s important to be aware of the potential risks that generative AI poses to your privacy. 

3. Be careful with the data you share online.

Generative AI uses existing data to generate new data, so it’s important to be aware of what you’re sharing online. Only share data you’re comfortable making public, use strong, unique passwords, and enable two-factor authentication whenever possible.
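As a small, practical illustration of the “strong, unique passwords” advice above, here’s a sketch using Python’s standard-library `secrets` module (the function name and default length are just illustrative choices, not a McAfee tool):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password drawn from letters, digits, and punctuation.

    secrets uses the operating system's cryptographically secure
    random source, unlike the general-purpose random module.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())      # e.g. a 16-character random password
print(generate_password(24))    # longer is stronger
```

In practice, a reputable password manager does this for you and also stores the result, which is what makes a unique password per site workable.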

4. Use privacy-focused tools.

There are a number of privacy-focused tools available that can help protect your data from generative AI. These include tools like privacy-focused browsers, VPNs, and encryption tools. It’s important to understand how these tools work and how they can help protect your data. 

5. Stay informed.

It’s important to stay up-to-date on the latest developments in generative AI and privacy. Follow trusted news sources and keep an eye out for changes in the law that could affect your privacy. 

By following these tips, you can help protect your privacy from generative AI. It’s important to be aware of the potential risks that this technology poses and to take steps to protect yourself and your data. 

Of course, the most important step is to be aware and informed. Research the companies and organizations that are using generative AI and make sure you understand how they use your data. Read the terms and conditions of any contracts you sign and be aware of any third parties that may have access to your data. Additionally, look out for notifications of changes in privacy policies and take the time to understand any changes that could affect you.

Finally, make sure to regularly check your accounts and reports to make sure that your data is not being used without your consent. You can also take the extra step of making use of the security and privacy features available on your device. Taking the time to understand which settings are available, as well as what data is being collected and used, can help you protect your privacy and keep your data safe. 

 

This blog post was co-written with artificial intelligence (AI) as a tool to supplement, enhance, and make suggestions. While AI may assist in the creative and editing process, the thoughts, ideas, opinions, and the finished product are entirely human and original to their author. We strive to ensure accuracy and relevance, but please be aware that AI-generated content may not always fully represent the intent or expertise of human-authored material. 

The post How to Protect Your Privacy From Generative AI appeared first on McAfee Blog.

AI & Your Family: The Wows and Potential Risks

By: McAfee

When we come across the term Artificial Intelligence (AI), our mind often ventures into the realm of sci-fi movies like I, Robot, The Matrix, and Ex Machina. We’ve always perceived AI as a futuristic concept, something that’s happening in a galaxy far, far away. However, AI is not only here in our present but has also been a part of our lives for several years in the form of various technological devices and applications.

In our day-to-day lives, we use AI in many instances without even realizing it. AI has permeated into our homes, our workplaces, and is at our fingertips through our smartphones. From cell phones with built-in smart assistants to home assistants that carry out voice commands, from social networks that determine what content we see to music apps that curate playlists based on our preferences, AI has its footprints everywhere. Therefore, it’s integral to not only embrace the wows of this impressive technology but also understand and discuss the potential risks associated with it.

Dig Deeper: Artificial Imposters—Cybercriminals Turn to AI Voice Cloning for a New Breed of Scam

AI in Daily Life: A Blend of Convenience and Intrusion

AI, a term that might sound intimidating to many, is not so when we understand it. It is essentially technology that can be programmed to achieve certain goals without assistance. In simple words, it’s a computer’s ability to predict, process data, evaluate it, and take necessary action. This smart way of performing tasks is being implemented in education, business, manufacturing, retail, transportation, and almost every other industry and cultural sector you can think of.

AI has been doing a lot of good too. For instance, Instagram, one of the most popular social networks, is now deploying AI technology to detect and combat cyberbullying in both comments and photos. No doubt, AI is having a significant impact on everyday life and is poised to metamorphose the future landscape. However, alongside its benefits, AI has brought forward a set of new challenges and risks. From self-driving cars malfunctioning to potential jobs lost to AI robots, from fake videos and images to privacy breaches, the concerns are real and need timely discussions and preventive measures.

Navigating the Wows and Risks of AI

AI has made it easier for people to face-swap within images and videos, leading to “deep fake” videos that appear remarkably realistic and often go viral. A desktop application called FakeApp allows users to seamlessly swap faces and share fake videos and images. While this displays the power of AI technology, it also brings to light the responsibility and critical thinking required when consuming and sharing online content.

Dig Deeper: The Future of Technology: AI, Deepfake, & Connected Devices

Yet another concern raised by AI is privacy breaches. The Cambridge Analytica/Facebook scandal of 2018, alleged to have used AI technology unethically to collect Facebook user data, serves as a reminder that our private (and public) information can be exploited for financial or political gain. Thus, it becomes crucial to discuss and take necessary steps like locking down privacy settings on social networks and being mindful of the information shared in the public feed, including reactions and comments on other content.

McAfee Pro Tip: Cybercriminals employ advanced methods to deceive individuals, propagating sensationalized fake news, creating deceptive catfish dating profiles, and orchestrating harmful impersonations. Recognizing sophisticated AI-generated content can pose a challenge, but certain indicators may signal that you’re encountering a dubious image or interacting with a perpetrator operating behind an AI-generated profile. Know the indicators. 

AI and Cybercrime

With the advent of AI, cybercrime has found a new ally. As per McAfee’s Threats Prediction Report, AI technology might enable hackers to bypass security measures on networks undetected. This can lead to data breaches, malware attacks, ransomware, and other criminal activities. Moreover, AI-generated phishing emails are scamming people into unknowingly handing over sensitive data.

Dig Deeper: How to Keep Your Data Safe From the Latest Phishing Scam

Bogus emails are becoming highly personalized and can trick intelligent users into clicking malicious links. Given the sophistication of these AI-related scams, it is vital to constantly remind ourselves and our families to be cautious with every click, even those from known sources. The need to be alert and informed cannot be overstressed, especially in times when AI and cybercrime often seem to be two sides of the same coin.

IoT Security Concerns in an AI-Powered World

As homes evolve to be smarter and synced with AI-powered Internet of Things (IoT) products, potential threats have proliferated. These threats are not limited to computers and smartphones but extend to AI-enabled devices such as voice-activated assistants. According to McAfee’s Threat Prediction Report, these IoT devices are particularly susceptible as points of entry for cybercriminals. Other devices at risk, as highlighted by security experts, include routers and tablets.

This means we need to secure all our connected devices and home internet at its source – the network. Routers provided by your ISP (Internet Service Provider) are often less secure, so consider purchasing your own. As a primary step, ensure that all your devices are updated regularly. More importantly, change the default password on these devices and secure both your primary network and your guest network with strong passwords.

How to Discuss AI with Your Family

Having an open dialogue about AI and its implications is key to navigating the intricacies of this technology. Parents need to have open discussions with kids about the positives and negatives of AI. When discussing fake videos and images, emphasize the importance of critical thinking before sharing any content online. You might even show them face-swapping tools like FakeApp so they can see firsthand how realistic deepfake photos and videos have become – and why such content so often goes viral.

Privacy is another critical area for discussion. After the Cambridge Analytica/Facebook scandal of 2018, the conversation about privacy breaches has become more significant. These incidents remind us how our private (and public) information can be misused for financial or political gain. Locking down privacy settings, being mindful of the information shared, and understanding the implications of reactions and comments are all topics worth discussing. 

Being Proactive Against AI-Enabled Cybercrime

Awareness and knowledge are the best tools against AI-enabled cybercrime. Making families understand that bogus emails can now be highly personalized and can trick even the most tech-savvy users into clicking malicious links is essential. AI can generate phishing emails, scamming people into handing over sensitive data. In this context, constant reminders to be cautious with every click, even those from known sources, are necessary.

Dig Deeper: Malicious Websites – The Web is a Dangerous Place

The advent of AI has also likely allowed hackers to bypass security measures on networks undetected, leading to data breaches, malware attacks, and ransomware. Therefore, being alert and informed is more than just a precaution – it is a vital safety measure in the digital age.

Final Thoughts

Artificial Intelligence has indeed woven itself into our everyday lives, making things more convenient, efficient, and connected. However, with these advancements come potential risks and challenges. From privacy breaches and fake content to AI-enabled cybercrime, the concerns are real and need our full attention. By understanding AI better, having open discussions, and taking appropriate security measures, we can leverage this technology’s immense potential without falling prey to its risks. In our AI-driven world, being informed, aware, and proactive is the key to staying safe and secure.

To safeguard and fortify your online identity, we strongly recommend that you delve into the extensive array of protective features offered by McAfee+. This comprehensive cybersecurity solution is designed to provide you with a robust defense against a wide spectrum of digital threats, ranging from malware and phishing attacks to data breaches and identity theft.

The post AI & Your Family: The Wows and Potential Risks appeared first on McAfee Blog.

Get Yourself AI-powered Scam Text Protection That Spots and Blocks Scams in Real Time

The tables have turned. Now you can use AI to spot and block scam texts before they do you harm. 

You might have heard how scammers have tapped into the power of AI. It provides them with powerful tools to create convincing-looking scams on a massive scale, which can flood your phone with annoying and malicious texts. 

The good news is that we use AI too. And we have for some time to keep you safe. Now, we’ve put AI to use in another powerful way—to put an end to scam texts on your phone. 

Our new Text Scam Detector automatically identifies and alerts you if it detects a dangerous URL in your texts. No more wondering if a package delivery message or bank notification is real or not. Our patented AI technology detects malicious links instantaneously and sends an alert to stop you before you click. And as a second line of defense, it can block risky sites if you accidentally follow a scam link in a text, email, social media post, and more.

Stop scam texts and their malicious links.  

The time couldn’t be more right for this kind of protection. Last year, Americans lost $330 million to text scams alone, more than double the previous year, with an average reported loss of $1,000, according to the Federal Trade Commission. The deluge of these new sophisticated AI-generated scams is making it harder than ever to tell what’s real from what’s fake.  

Which is where our use of AI comes in. With it, you can turn the tables on scammers and their AI tools.

Here’s a closer look at how Text Scam Detector works: 

  • Proactive and automatic protection: Get notifications about a scam text before you even open the message. After you grant permission to scan the URLs in your texts, Text Scam Detector takes charge and will let you know which texts aren’t safe and shouldn’t be opened. 
  • Patented and powerful AI: McAfee’s AI runs in real-time and is constantly analyzing and processing millions of malicious links from around the world to provide better detection. This means Text Scam Detector can protect you from advanced threats including new zero-day threats that haven’t been seen before. McAfee’s AI continually gets smarter to stay ahead of cybercriminals to protect you even better. 
  • Simple and easy to use: When you’re set up, Text Scam Detector goes to work immediately. No copying or pasting or checking whether a text or email is a scam. We do the work for you and the feature will alert you if it detects a dangerous link and blocks risky sites in real time if you accidentally click.   
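McAfee doesn’t publish the internals of Text Scam Detector, but the general idea of scanning message text for known-bad URLs can be sketched in a few lines of Python. This toy version checks links extracted from a message against a hypothetical static blocklist (`KNOWN_BAD_DOMAINS` is made up for the example); a real product relies on continuously updated threat intelligence and AI models rather than a fixed list:

```python
import re
from urllib.parse import urlparse

# Hypothetical blocklist for illustration only; real detectors use
# constantly updated threat feeds and machine-learning classifiers.
KNOWN_BAD_DOMAINS = {"scam-delivery.example", "fake-bank.example"}

URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def flag_suspicious_links(text: str) -> list[str]:
    """Return any URLs in the text whose domain is on the blocklist."""
    flagged = []
    for url in URL_PATTERN.findall(text):
        domain = urlparse(url).hostname  # e.g. "scam-delivery.example"
        if domain in KNOWN_BAD_DOMAINS:
            flagged.append(url)
    return flagged

msg = "Your package is on hold. Pay the fee at https://scam-delivery.example/pay"
print(flag_suspicious_links(msg))  # flags the blocklisted link
```

The sketch only shows the flow – extract link, resolve its domain, decide; the hard part in a real detector is maintaining and scoring the reputation data behind that decision.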

How do I get Text Scam Detector? 

Text Scam Detector is free for most existing customers, and free to try for new customers. 

Most McAfee customers now have Text Scam Detector available. Simply update your app. There’s no need to purchase or download anything separately. Set up Text Scam Detector in your mobile app, then enable Safe Browsing for extra protection or download our web protection extension for your PC or Mac from the McAfee Protection Center. Some exclusions apply¹. 

For new customers, Text Scam Detector is available as part of a free seven-day trial of McAfee Mobile Security. After the trial period, McAfee Mobile Security is $2.99 a month or $29.99 annually for a one-year subscription. 

As part of our new Text Scam Detector, you can benefit from McAfee’s risky link identification on any platform you use. It can block dangerous links should you accidentally click on one, whether that’s through texts, emails, social media, or a browser. It’s powered by AI as well, and you’ll get it by setting up Safe Browsing on your iOS² or Android device—and by using the WebAdvisor extension on PCs, Macs and iOS. 

Scan the QR code to download Text Scam Detector from the Google Play Store

Yes, the tables have turned on scammers.

AI works in your favor. Just as it has for some time now if you’ve used McAfee for your online protection. Text Scam Detector takes it to a new level. As scammers use AI to create increasingly sophisticated attacks, Text Scam Detector can help you tell what’s real and what’s fake. 

 


  1. Customers currently with McAfee+, McAfee Total Protection, McAfee LiveSafe, and McAfee Mobile Security plans have Text Scam Detector included in their subscription.
  2. Scam text filtering is coming to iOS devices in October.  

The post Get Yourself AI-powered Scam Text Protection That Spots and Blocks Scams in Real Time appeared first on McAfee Blog.

The Dangers of Artificial Intelligence

By: McAfee

Over the decades, Hollywood has depicted artificial intelligence (AI) in multiple unsettling ways. In their futuristic settings, the AI begins to think for itself, outsmarts the humans, and overthrows society. The resulting dark world is left in a constant barrage of storms – metaphorically and meteorologically. (It’s always so gloomy and rainy in those movies.) 

AI has been a part of manufacturing, shipping, and other industries for several years now. But the emergence of mainstream AI in daily life is stirring debates about its use. Content, art, video, and voice generation tools can make you write like Shakespeare, look like Tom Cruise, or create digital masterpieces in the style of Van Gogh. While it starts out as fun and games, an overreliance or misuse of AI can quickly turn shortcuts into irresponsibly cut corners and pranks into malicious impersonations.   

It’s imperative that everyone interact responsibly with mainstream AI tools like ChatGPT, Bard, Craiyon, and Voice.ai, among others, to avoid these three real dangers of AI that you’re most likely to encounter. 

1. AI Hallucinations

The cool thing about AI is it has advanced to the point where it appears to think for itself. It’s constantly learning and forming new patterns. The more questions you ask it, the more data it collects and the “smarter” it gets. However, when you ask ChatGPT a question it doesn’t know the answer to, it often won’t admit that it doesn’t know. Instead, it may make up an answer like a precocious schoolchild. This phenomenon is known as an AI hallucination.

One prime example of an AI hallucination occurred in a New York courtroom. A lawyer presented a lengthy brief that cited multiple law cases to back his point. It turns out the lawyer used ChatGPT to write the entire brief and he didn’t fact-check the AI’s work. ChatGPT fabricated its supporting citations, none of which existed.1

AI hallucinations could become a threat to society in that they could populate the internet with false information. Researchers and writers have a duty to thoroughly double-check any work they outsource to text generation tools like ChatGPT. When a trustworthy online source publishes content and asserts it as the unbiased truth, readers should be able to trust that the publisher isn’t leading them astray.

2. Deepfake, AI Art, and Fake News

We all know that you can’t trust everything you read on the internet. Deepfake and AI-generated art deepen the mistrust. Now, you can’t trust everything you see on the internet. 

Deepfake is the digital manipulation of a photo or video to portray an event that never happened, or to portray a person doing or saying something they never did or said. AI art generators create new images by drawing on a compilation of published works from the internet to fulfill a prompt.

Deepfake and AI art become a danger to the public when people use them to supplement fake news reports. Individuals and organizations who feel strongly about their side of an issue may shunt integrity to the side to win new followers to their cause. Fake news is often incendiary and in extreme cases can cause unrest.  

Before you share a “news” article with your social media following or shout about it to others, do some additional research to ensure its accuracy. Additionally, scrutinize the video or image accompanying the story. A deepfake gives itself away when facial expressions or hand gestures don’t look quite right. Also, the face may distort if the hands get too close to it. To spot AI art, think carefully about the context. Is it too fantastic or terrible to be true? Check out the shadows, shading, and the background setting for anomalies. 

3. AI Voice Scams

An emerging dangerous use of AI is cropping up in AI voice scams. Phishers have attempted to get people’s personal details and gain financially over the phone for decades. But now with the help of AI voice tools, their scams are entering a whole new dimension of believability.  

With as little as three seconds of genuine audio, AI voice generators can mimic someone’s voice with up to 95% accuracy. While AI voice generators may add some humor to a comedy deepfake video, criminals are using the technology to seriously frighten people and scam them out of money at the same time. The criminal will impersonate someone using their voice and call the real person’s loved one, saying they’ve been robbed or been in an accident. McAfee’s Beware the Artificial Imposter report discovered that 77% of people targeted by an AI voice scam lost money as a result. Seven percent of people lost as much as $5,000 to $15,000.

Use AI Responsibly 

Google’s code of conduct states “Don’t be evil.”2 Because AI relies on input from humans, we have the power to make AI as benevolent or as malevolent as we are. There’s a certain amount of trust placed in the engineers who hold the future of the technology – and if Hollywood is to be believed, the fate of humanity – in their deft hands and brilliant minds.

“60 Minutes” likened AI’s influence on society to that of fire, agriculture, and electricity.3 Because AI never has to take a break, it can learn and teach itself new things every second of every day. It’s advancing quickly, and some of the written and visual art it creates can result in some touching expressions of humanity. But AI doesn’t quite understand the emotion it portrays. It’s simply a game of matching patterns. Is AI – especially its use in creative pursuits – dimming the spark of humanity? That remains to be seen.

When used responsibly and in moderation in daily life, it may make us more efficient and inspire us to think in new ways. Be on the lookout for the dangers of AI and use this amazing technology for good. 

1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”

2Alphabet, “Google Code of Conduct”  

360 Minutes, “Artificial Intelligence Revolution”

The post The Dangers of Artificial Intelligence appeared first on McAfee Blog.
