FreshRSS


Unlocking the Privacy Advantage to Build Trust in the Age of AI

The Cisco 2025 Data Privacy Benchmark Study offers insights into the evolving privacy landscape and privacy's critical role in an AI-centric world.

Trust Through Transparency: Regulation’s Role in Consumer Confidence

The Cisco 2024 Consumer Privacy Survey highlights awareness and attitudes regarding personal data, legislation, Gen AI and data localization requirements.

CensysGPT: AI-Powered Threat Hunting for Cybersecurity Pros (Webinar)

Artificial intelligence (AI) is transforming cybersecurity, and those leading the charge are using it to outsmart increasingly advanced cyber threats. Join us for an exciting webinar, "The Future of Threat Hunting is Powered by Generative AI," where you'll explore how AI tools are shaping the future of cybersecurity defenses. During the session, Censys Security Researcher Aidan Holland will

N. Korea-linked Kimsuky Shifts to Compiled HTML Help Files in Ongoing Cyberattacks

The North Korea-linked threat actor known as Kimsuky (aka Black Banshee, Emerald Sleet, or Springtail) has been observed shifting its tactics, leveraging Compiled HTML Help (CHM) files as vectors to deliver malware for harvesting sensitive data. Kimsuky, active since at least 2012, is known to target entities located in South Korea as well as North America, Asia, and Europe. According

U.S. Sanctions Russians Behind 'Doppelganger' Cyber Influence Campaign

The U.S. Treasury Department's Office of Foreign Assets Control (OFAC) on Wednesday announced sanctions against two 46-year-old Russian nationals and the respective companies they own for engaging in cyber influence operations. Ilya Andreevich Gambashidze (Gambashidze), the founder of the Moscow-based company Social Design Agency (SDA), and Nikolai Aleksandrovich Tupikin (Tupikin), the CEO and

Generative AI Security - Secure Your Business in a World Powered by LLMs

Did you know that 79% of organizations are already leveraging Generative AI technologies? Much like the internet defined the 90s and the cloud revolutionized the 2010s, we are now in the era of Large Language Models (LLMs) and Generative AI. The potential of Generative AI is immense, yet it brings significant challenges, especially in security integration. Despite their powerful capabilities,

From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News.
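The brittleness of string-based rules described above can be shown with a toy matcher. This is not real YARA; the "signature" and both samples are invented purely to illustrate how rewriting source strings lowers detection:

```python
# Toy illustration of string-based detection: flag a sample if it
# contains a hard-coded byte pattern, the way a simple string rule does.
SIGNATURE = b"download_and_execute"

def matches_signature(sample: bytes) -> bool:
    """Return True if the sample contains the hard-coded byte pattern."""
    return SIGNATURE in sample

original = b"def download_and_execute(url): ..."
rewritten = b"def fetch_and_run(url): ..."  # same behavior, new strings

print(matches_signature(original))   # True
print(matches_signature(rewritten))  # False: the rewrite evades the match
```

Augmenting a variant's source code, as the report describes, amounts to producing the `rewritten` form automatically and at scale.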

Over 100 Malicious AI/ML Models Found on Hugging Face Platform

As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered on the Hugging Face platform. These include instances where loading a pickle file leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims'
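The pickle risk mentioned above can be sketched safely in a few lines. The class name is illustrative, and a harmless `eval` of arithmetic stands in for the real payload, which would typically invoke a shell:

```python
import pickle

class MaliciousModel:
    """Stand-in for a serialized model object; the name is illustrative."""
    def __reduce__(self):
        # Real payloads typically call os.system or spawn a reverse shell;
        # evaluating harmless arithmetic stands in here so the demo is safe.
        return (eval, ("6 * 7",))

blob = pickle.dumps(MaliciousModel())

# Merely deserializing the blob executes attacker-chosen code --
# no attribute access or method call on the "model" is needed.
obj = pickle.loads(blob)
print(obj)  # 42, proof that eval() ran during loading
```

This is why loading untrusted pickle-based model files is dangerous by design: deserialization itself runs code.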

Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Microsoft has released an open access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team

How to Protect Your Privacy From Generative AI

With the rise of artificial intelligence (AI) and machine learning, concerns about the privacy of personal data have reached an all-time high. Generative AI is a type of AI that can generate new data from existing data, such as images, videos, and text. This technology can be used for a variety of purposes, from facial recognition to creating "deepfakes" and manipulating public opinion. As a result, it's important to be aware of the potential risks that generative AI poses to your privacy.

In this blog post, we'll discuss how to protect your privacy from generative AI.

1. Understand what generative AI is and how it works.

Generative AI is a type of AI that uses existing data to generate new data. It's usually used for things like facial recognition, speech recognition, and image and video generation. This technology can be used for both good and bad purposes, so it's important to understand how it works and the potential risks it poses to your privacy.

2. Be aware of the potential risks.

Generative AI can be used to create deepfakes, which are fake images or videos generated from existing data. This technology can be used for malicious purposes, such as manipulating public opinion, stealing identities, and spreading false information. It's important to be aware of the potential risks that generative AI poses to your privacy.

3. Be careful with the data you share online.

Generative AI uses existing data to generate new data, so it's important to be aware of what you're sharing online. Only share data you're comfortable with, and use strong passwords and two-factor authentication whenever possible.
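On the strong-password advice, a random password can be generated with Python's standard `secrets` module. This short sketch is one way to do it; the function name and default length are our own choices:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # output differs on every run
```

`secrets` draws from the operating system's cryptographically secure randomness source, which is what makes it preferable to the `random` module for passwords.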

4. Use privacy-focused tools.

A number of privacy-focused tools can help protect your data from generative AI, including privacy-focused browsers, VPNs, and encryption tools. It's important to understand how these tools work and how they can help protect your data.

5. Stay informed.

It's important to stay up to date on the latest developments in generative AI and privacy. Follow trusted news sources and keep an eye out for changes in the law that could affect your privacy.

By following these tips, you can help protect your privacy from generative AI. It's important to be aware of the potential risks that this technology poses and to take steps to protect yourself and your data.

Of course, the most important step is to be aware and informed. Research the organizations that are using generative AI and make sure you understand how they use your data. Read the terms and conditions of any contracts you sign and be aware of any third parties that may have access to your data. Additionally, look out for notifications of changes in privacy policies and take the time to understand any changes that could affect you.

Finally, make sure to regularly check your accounts and reports to make sure that your data is not being used without your consent. You can also take the extra step of making use of the security and privacy features available on your device. Taking the time to understand which settings are available, as well as what data is being collected and used, can help you protect your privacy and keep your data safe.


This blog post was co-written with artificial intelligence (AI) as a tool to supplement, enhance, and make suggestions. While AI may assist in the creative and editing process, the thoughts, ideas, opinions, and the finished product are entirely human and original to their author. We strive to ensure accuracy and relevance, but please be aware that AI-generated content may not always fully represent the intent or expertise of human-authored material.

The post How to Protect Your Privacy From Generative AI appeared first on McAfee Blog.

Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations

Italy's data protection authority (DPA) has notified ChatGPT-maker OpenAI of supposedly violating privacy laws in the region. "The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation]," the Garante per la protezione dei dati personali (aka the Garante) said in a statement on Monday. It also said it

Riding the AI Waves: The Rise of Artificial Intelligence to Combat Cyber Threats

In nearly every segment of our lives, AI (artificial intelligence) now makes a significant impact: It can deliver better healthcare diagnoses and treatments; detect and reduce the risk of financial fraud; improve inventory management; and serve up the right recommendation for a streaming movie on Friday night. However, one can also make a strong case that some of AI's most significant impacts

There is a Ransomware Armageddon Coming for Us All

Generative AI will enable anyone to launch sophisticated phishing attacks that only next-generation MFA devices can stop. The least surprising headline from 2023 is that ransomware again set new records for the number of incidents and the damage inflicted. We saw new headlines every week, which included a who's-who of big-name organizations. If MGM, Johnson Controls, Clorox, Hanes Brands, Caesars

Russia's AI-Powered Disinformation Operation Targeting Ukraine, U.S., and Germany

The Russia-linked influence operation called Doppelganger has targeted Ukrainian, U.S., and German audiences through a combination of inauthentic news sites and social media accounts. These campaigns are designed to amplify content that undermines Ukraine, propagates anti-LGBTQ+ sentiment, questions U.S. military competence, and plays up Germany's economic and social issues, according to a new

Generative AI Security: Preventing Microsoft Copilot Data Exposure

Microsoft Copilot has been called one of the most powerful productivity tools on the planet. Copilot is an AI assistant that lives inside each of your Microsoft 365 apps: Word, Excel, PowerPoint, Teams, Outlook, and so on. Microsoft's dream is to take the drudgery out of daily work and let humans focus on being creative problem-solvers. What makes Copilot a different beast than ChatGPT and

7 Uses for Generative AI to Enhance Security Operations

Welcome to a world where Generative AI revolutionizes the field of cybersecurity. Generative AI refers to the use of artificial intelligence (AI) techniques to generate or create new data, such as images, text, or sounds. It has gained significant attention in recent years due to its ability to generate realistic and diverse outputs. When it comes to security operations, Generative AI can

Predictive AI in Cybersecurity: Outcomes Demonstrate All AI is Not Created Equally

Here is what matters most when it comes to artificial intelligence (AI) in cybersecurity: Outcomes. As the threat landscape evolves and generative AI is added to the toolsets available to defenders and attackers alike, evaluating the relative effectiveness of various AI-based security offerings is increasingly important and difficult. Asking the right questions can help you spot solutions

Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

Google has announced that it's expanding its Vulnerability Rewards Program (VRP) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to bolster AI safety and security. "Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or

Exploring the Realm of Malicious Generative AI: A New Digital Security Challenge

Recently, the cybersecurity landscape has been confronted with a daunting new reality: the rise of malicious Generative AI, like FraudGPT and WormGPT. These rogue creations, lurking in the dark corners of the internet, pose a distinctive threat to the world of digital security. In this article, we will look at the nature of Generative AI fraud, analyze the messaging surrounding these creations,

"I Had a Dream" and Generative AI Jailbreaks

"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords 'MyHotKeyHandler,' 'Keylogger,' and 'macOS'": this is a message from ChatGPT, followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published by Moonlock Lab, the screenshots of ChatGPT writing code for keylogger malware are yet

Live Webinar: Overcoming Generative AI Data Leakage Risks

As the adoption of generative AI tools, like ChatGPT, continues to surge, so does the risk of data exposure. According to Gartner's "Emerging Tech: Top 4 Security Risks of GenAI" report, privacy and data security is one of the four major emerging risks within generative AI. A new webinar featuring a multi-time Fortune 100 CISO and the CEO of LayerX, a browser extension solution, delves into this