FreshRSS

☐ ☆ ✇ WIRED

The National Institute of Standards and Technology Braces for Mass Firings

By: Will Knight, Paresh Dave, Leah Feiger — February 20th 2025 at 20:19
Approximately 500 NIST staffers, including at least three lab directors, are expected to lose their jobs at the standards agency as part of the ongoing DOGE purge, sources tell WIRED.
☐ ☆ ✇ Security – Cisco Blog

Achieve Transformative Network Security With Cisco Hybrid Mesh Firewall

By: Rick Miles — February 12th 2025 at 08:30
Hybrid Mesh Firewall addresses three forces: the fine-grained composition and distribution of apps in data centers, increasingly complex modern networks, and sophisticated threats.
☐ ☆ ✇ Security – Cisco Blog

Cisco and Wiz Collaborate to Enhance Cloud Security: Tackling AI-Generated Threats in Complex IT Infrastructures

By: Rick Miles — February 12th 2025 at 08:30
Cisco is collaborating with Wiz. Together, they aim to improve cloud security for enterprises grappling with AI-generated threats in intricate IT landscapes.
☐ ☆ ✇ Krebs on Security

Experts Flag Security, Privacy Risks in DeepSeek AI App

By: BrianKrebs — February 6th 2025 at 21:12

New mobile apps from the Chinese artificial intelligence (AI) company DeepSeek have remained among the top three “free” downloads for Apple and Google devices since their debut on Jan. 25, 2025. But experts caution that many of DeepSeek’s design choices — such as using hard-coded encryption keys, and sending unencrypted user and device data to Chinese companies — introduce a number of glaring security and privacy risks.

Public interest in the DeepSeek AI chat apps swelled following widespread media reports that the upstart Chinese AI firm had managed to match the abilities of cutting-edge chatbots while using a fraction of the specialized computer chips that leading AI companies rely on. As of this writing, DeepSeek is the third most-downloaded “free” app on the Apple store, and #1 on Google Play.

DeepSeek’s rapid rise caught the attention of the mobile security firm NowSecure, a Chicago-based company that helps clients screen mobile apps for security and privacy threats. In a teardown of the DeepSeek app published today, NowSecure urged organizations to remove the DeepSeek iOS mobile app from their environments, citing security concerns.

NowSecure founder Andrew Hoog said they haven’t yet completed an in-depth analysis of the DeepSeek app for Android devices, but that there is little reason to believe its basic design would be functionally much different.

Hoog told KrebsOnSecurity there were a number of qualities about the DeepSeek iOS app that suggest the presence of deep-seated security and privacy risks. For starters, he said, the app collects an awful lot of data about the user’s device.

“They are doing some very interesting things that are on the edge of advanced device fingerprinting,” Hoog said, noting that one property of the app tracks the device’s name — which for many iOS devices defaults to the customer’s name followed by the type of iOS device.

The device information shared, combined with the user’s Internet address and data gathered from mobile advertising companies, could be used to deanonymize users of the DeepSeek iOS app, NowSecure warned. The report notes that DeepSeek communicates with Volcengine, a cloud platform developed by ByteDance (the makers of TikTok), although NowSecure said it wasn’t clear whether the app is simply using ByteDance’s cloud service or whether the data sharing between the two companies extends further.

Image: NowSecure.

Perhaps more concerning, NowSecure said the iOS app transmits device information “in the clear,” without any encryption to encapsulate the data. This means the data being handled by the app could be intercepted, read, and even modified by anyone who has access to any of the networks that carry the app’s traffic.

“The DeepSeek iOS app globally disables App Transport Security (ATS) which is an iOS platform level protection that prevents sensitive data from being sent over unencrypted channels,” the report observed. “Since this protection is disabled, the app can (and does) send unencrypted data over the internet.”

Hoog said the app does selectively encrypt portions of the responses coming from DeepSeek servers. But they also found it uses an insecure and now deprecated encryption algorithm called 3DES (aka Triple DES), and that the developers had hard-coded the encryption key. That means the cryptographic key needed to decipher those data fields can be extracted from the app itself.
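The practical consequence of a hard-coded key is worth spelling out: anyone who can read the app’s binary can recover the key and decrypt those “encrypted” fields offline. A minimal sketch of the extraction step, using an invented binary blob and key (not DeepSeek’s actual key or binary layout), assuming the key is embedded as a printable string:

```python
import re

# Toy stand-in for an app binary that embeds a 24-byte 3DES key as a
# printable string -- all values here are hypothetical, for illustration.
app_binary = b"\x00\x7fELF...des_key=0123456789abcdef01234567...\x00"

# Anyone with a copy of the app can recover the key with a simple scan,
# the same idea as running `strings` over the binary.
match = re.search(rb"des_key=([0-9a-f]{24})", app_binary)
hardcoded_key = match.group(1)

# With the key in hand, every field the app "protects" with it can be
# decrypted offline -- the encryption adds no real confidentiality.
print(hardcoded_key.decode())  # → 0123456789abcdef01234567
```

In practice an analyst would run `strings` or a disassembler over the real binary, but the point stands: symmetric encryption with a key shipped inside the app provides no confidentiality against anyone who holds the app itself.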

There were other, less alarming security and privacy issues highlighted in the report, but Hoog said he’s confident there are additional, unseen security concerns lurking within the app’s code.

“When we see people exhibit really simplistic coding errors, as you dig deeper there are usually a lot more issues,” Hoog said. “There is virtually no priority around security or privacy. Whether cultural, or mandated by China, or a witting choice, taken together they point to significant lapse in security and privacy controls, and that puts companies at risk.”

Apparently, plenty of others share this view. Axios reported on January 30 that U.S. congressional offices are being warned not to use the app.

“[T]hreat actors are already exploiting DeepSeek to deliver malicious software and infect devices,” read the notice from the chief administrative officer for the House of Representatives. “To mitigate these risks, the House has taken security measures to restrict DeepSeek’s functionality on all House-issued devices.”

TechCrunch reports that Italy and Taiwan have already moved to ban DeepSeek over security concerns. Bloomberg writes that the Pentagon has blocked access to DeepSeek. CNBC says NASA also banned employees from using the service, as did the U.S. Navy.

Beyond security concerns tied to the DeepSeek iOS app, there are indications the Chinese AI company may be playing fast and loose with the data that it collects from and about users. On January 29, researchers at Wiz said they discovered a publicly accessible database linked to DeepSeek that exposed “a significant volume of chat history, backend data and sensitive information, including log streams, API secrets, and operational details.”

“More critically, the exposure allowed for full database control and potential privilege escalation within the DeepSeek environment, without any authentication or defense mechanism to the outside world,” Wiz wrote. [Full disclosure: Wiz is currently an advertiser on this website.]

KrebsOnSecurity sought comment on the report from DeepSeek and from Apple. This story will be updated with any substantive replies.

☐ ☆ ✇ Security – Cisco Blog

AI Cyber Threat Intelligence Roundup: January 2025

By: Adam Swanda — February 1st 2025 at 13:00
AI threat research is a fundamental part of Cisco’s approach to AI security. Our roundups highlight new findings from both original and third-party sources.
☐ ☆ ✇ WIRED

DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot

By: Matt Burgess, Lily Hay Newman — January 31st 2025 at 18:30
Security researchers tested 50 well-known jailbreaks against DeepSeek’s popular new AI chatbot. It didn’t stop a single one.
☐ ☆ ✇ WIRED

Exposed DeepSeek Database Revealed Chat Prompts and Internal Data

By: Lily Hay Newman, Matt Burgess — January 29th 2025 at 21:34
China-based DeepSeek has exploded in popularity, drawing greater scrutiny. Case in point: Security researchers found more than 1 million records, including user data and API keys, in an open database.
☐ ☆ ✇ WIRED

DeepSeek’s Popular AI App Is Explicitly Sending US Data to China

By: Matt Burgess, Lily Hay Newman — January 27th 2025 at 22:10
Amid ongoing fears over TikTok, Chinese generative AI platform DeepSeek says it’s sending heaps of US user data straight to its home country, potentially setting the stage for greater scrutiny.
☐ ☆ ✇ Security – Cisco Blog

Cisco AI Defense: Comprehensive Security for Enterprise AI Adoption

By: DJ Sampath — January 15th 2025 at 13:00
Cisco AI Defense is a single, end-to-end solution that helps your organization understand and mitigate risk on both the user and application levels.
☐ ☆ ✇ Security – Cisco Blog

Advancing AI Security and Contributing to CISA’s JCDC AI Efforts 

By: Omar Santos — January 14th 2025 at 15:15
Discover how CISA's new AI Security Incident Collaboration Playbook strengthens AI security and resilience.
☐ ☆ ✇ WIRED

Worry About Misuse of AI, Not Superintelligence

By: Arvind Narayanan, Sayash Kapoor — December 13th 2024 at 14:00
AI risks arise not from AI acting on its own, but because of what people do with it.
☐ ☆ ✇ Security – Cisco Blog

Robust Intelligence, Now Part of Cisco, Recognized as a 2024 Gartner® Cool Vendor™ for AI Security

By: Emile Antone — November 11th 2024 at 13:00
Cisco is excited that Robust Intelligence, a recently acquired AI security startup, is mentioned in the 2024 Gartner Cool Vendors for AI Security report.
☐ ☆ ✇ Security – Cisco Blog

Quality is Priority Zero, Especially for Security

By: Shailaja Shankar — October 21st 2024 at 12:00
Security software can be the first line of defense or the last, and the cost of failure is catastrophic. That's why quality is priority zero for Cisco.
☐ ☆ ✇ Security – Cisco Blog

Using Artificial Intelligence to Catch Sneaky Images in Email

By: Greg Barnes — October 16th 2024 at 12:00
Image-based fraud in email can be challenging to detect and prevent. By leveraging AI, security teams can make inboxes more secure.
☐ ☆ ✇ WIRED

This AI Tool Helped Convict People of Murder. Then Someone Took a Closer Look

By: Todd Feathers — October 15th 2024 at 11:00
Global Intelligence claims its Cybercheck technology can help cops find key evidence to nail a case. But a WIRED investigation reveals the smoking gun often appears far less solid.
☐ ☆ ✇ WIRED

Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram

By: Matt Burgess — October 15th 2024 at 10:30
Bots that “remove clothes” from images have run rampant on the messaging app, allowing people to create nonconsensual deepfake images even as lawmakers and tech companies try to crack down.
☐ ☆ ✇ Security – Cisco Blog

Delivering Modernized Security for Government Agencies: The Vital Role of FedRAMP

By: Shailaja Shankar — October 14th 2024 at 12:00
Cisco has been helping government agencies address their unique security and compliance challenges for decades. We continue to progress with FedRAMP.
☐ ☆ ✇ Security – Cisco Blog

Introducing Cisco’s AI Security Best Practice Portal

By: Omar Santos — October 10th 2024 at 12:00
Cisco's AI Security Portal contains resources to help you secure your AI implementation, whether you're a seasoned professional or new to the field.
☐ ☆ ✇ WIRED

What You Need to Know About Grok AI and Your Privacy

By: Kate O'Flaherty — September 9th 2024 at 10:30
xAI’s generative AI tool, Grok AI, is unhinged compared to its competitors. It’s also scooping up a ton of data that people post on X. Here’s how to keep your posts out of Grok—and why you should.
☐ ☆ ✇ Security – Cisco Blog

Introducing the Coalition for Secure AI (CoSAI)

By: Omar Santos — July 18th 2024 at 15:00
Announcing the launch of the Coalition for Secure AI (CoSAI) to help securely build, deploy, and operate AI systems to mitigate AI-specific security risks.
☐ ☆ ✇ WIRED

AI-Powered Super Soldiers Are More Than Just a Pipe Dream

By: Jared Keller — July 8th 2024 at 10:00
The US military has abandoned its half-century dream of a suit of powered armor in favor of a “hyper enabled operator,” a tactical AI assistant for special operations forces.
☐ ☆ ✇ WIRED

Amazon Is Investigating Perplexity Over Claims of Scraping Abuse

By: Dhruv Mehrotra, Andrew Couts — June 27th 2024 at 22:15
AWS hosted a server linked to the Bezos family- and Nvidia-backed search startup that appears to have been used to scrape the sites of major outlets, prompting an inquiry into potential rules violations.
☐ ☆ ✇ Security – Cisco Blog

Enhancing AI Security Incident Response Through Collaborative Exercises

By: Omar Santos — June 21st 2024 at 12:00
Takeaways from a tabletop exercise led by CISA's Joint Cyber Defense Collaborative (JCDC), which brought together government and industry leaders to enhance our collective ability to respond to AI-related security incidents.
☐ ☆ ✇ Security – Cisco Blog

Security, the cloud, and AI: building powerful outcomes while simplifying your experience

By: Rick Miles — June 7th 2024 at 12:00
Read how Cisco Security Cloud Control prioritizes consolidation of tools and simplification of security policy without compromising your defense.
☐ ☆ ✇ Security – Cisco Blog

Cisco Security at Cisco Live 2024: Innovating at Scale

By: Jeetu Patel — June 4th 2024 at 15:06
No matter how reliable and performant your network is, none of it matters if it’s not secure. To help make the world a safer place, we need to reimagine security.
☐ ☆ ✇ The Hacker News

New Tricks in the Phishing Playbook: Cloudflare Workers, HTML Smuggling, GenAI

By: Newsroom — May 27th 2024 at 09:02
Cybersecurity researchers are warning about phishing campaigns that abuse Cloudflare Workers to serve phishing sites used to harvest users' credentials associated with Microsoft, Gmail, Yahoo!, and cPanel Webmail. The attack method, called transparent phishing or adversary-in-the-middle (AitM) phishing, "uses Cloudflare Workers to act as a reverse proxy server for a
☐ ☆ ✇ The Hacker News

Five Core Tenets Of Highly Effective DevSecOps Practices

By: The Hacker News — May 21st 2024 at 11:33
One of the enduring challenges of building modern applications is to make them more secure without disrupting high-velocity DevOps processes or degrading the developer experience. Today’s cyber threat landscape is rife with sophisticated attacks aimed at all different parts of the software supply chain and the urgency for software-producing organizations to adopt DevSecOps practices that deeply
☐ ☆ ✇ The Hacker News

China-Linked Hackers Adopt Two-Stage Infection Tactic to Deploy Deuterbear RAT

By: Newsroom — May 17th 2024 at 11:20
Cybersecurity researchers have shed more light on a remote access trojan (RAT) known as Deuterbear used by the China-linked BlackTech hacking group as part of a cyber espionage campaign targeting the Asia-Pacific region this year. "Deuterbear, while similar to Waterbear in many ways, shows advancements in capabilities such as including support for shellcode plugins, avoiding handshakes
☐ ☆ ✇ The Hacker News

CensysGPT: AI-Powered Threat Hunting for Cybersecurity Pros (Webinar)

By: The Hacker News — May 10th 2024 at 12:52
Artificial intelligence (AI) is transforming cybersecurity, and those leading the charge are using it to outsmart increasingly advanced cyber threats. Join us for an exciting webinar, "The Future of Threat Hunting is Powered by Generative AI," where you'll explore how AI tools are shaping the future of cybersecurity defenses. During the session, Censys Security Researcher Aidan Holland will
☐ ☆ ✇ Security – Cisco Blog

Empowering Cybersecurity with AI: The Future of Cisco XDR

By: Siddhant Dash — May 7th 2024 at 07:00
Learn how the Cisco AI Assistant in XDR adds powerful functionality to Cisco XDR that increases defenders’ efficiency and accuracy.
☐ ☆ ✇ WIRED

A Vast New Data Set Could Supercharge the AI Hunt for Crypto Money Laundering

By: Andy Greenberg — May 1st 2024 at 13:00
Blockchain analysis firm Elliptic, MIT, and IBM have released a new AI model—and the 200-million-transaction dataset it's trained on—that aims to spot the “shape” of bitcoin money laundering.
☐ ☆ ✇ The Hacker News

U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

By: Newsroom — April 30th 2024 at 10:36
The U.S. government has unveiled new security guidelines aimed at bolstering critical infrastructure against artificial intelligence (AI)-related threats. "These guidelines are informed by the whole-of-government effort to assess AI risks across all sixteen critical infrastructure sectors, and address threats both to and from, and involving AI systems," the Department of Homeland Security (DHS)
☐ ☆ ✇ The Hacker News

Microsoft Warns: North Korean Hackers Turn to AI-Fueled Cyber Espionage

By: Newsroom — April 22nd 2024 at 07:12
Microsoft has revealed that North Korea-linked state-sponsored cyber actors have begun to use artificial intelligence (AI) to make their operations more effective and efficient. "They are learning to use tools powered by AI large language models (LLM) to make their operations more efficient and effective," the tech giant said in its latest report on East Asia hacking groups. The
☐ ☆ ✇ WIRED

The Biggest Deepfake Porn Website Is Now Blocked in the UK

By: Matt Burgess — April 19th 2024 at 16:54
The world's most-visited deepfake website and another large competing site are stopping people in the UK from accessing them, days after the UK government announced a crackdown.
☐ ☆ ✇ WIRED

The Real-Time Deepfake Romance Scams Have Arrived

By: Matt Burgess — April 18th 2024 at 11:00
Watch how smooth-talking scammers known as “Yahoo Boys” use widely available face-swapping tech to carry out elaborate romance scams.
☐ ☆ ✇ The Hacker News

AI Copilot: Launching Innovation Rockets, But Beware of the Darkness Ahead

By: The Hacker News — April 15th 2024 at 13:30
Imagine a world where the software that powers your favorite apps, secures your online transactions, and keeps your digital life running could be outsmarted and taken over by a cleverly disguised piece of code. This isn't a plot from the latest cyber-thriller; it's actually been a reality for years now. How this will change – in a positive or negative direction – as artificial intelligence (AI) takes on
☐ ☆ ✇ WIRED

How to Stop Your Data From Being Used to Train AI

By: Matt Burgess, Reece Rogers — April 10th 2024 at 11:30
Some companies let you opt out of allowing your content to be used for generative AI. Here’s how to take back (at least a little) control from ChatGPT, Google’s Gemini, and more.
☐ ☆ ✇ The Hacker News

AI-as-a-Service Providers Vulnerable to PrivEsc and Cross-Tenant Attacks

By: Newsroom — April 5th 2024 at 14:08
New research has found that artificial intelligence (AI)-as-a-service providers such as Hugging Face are susceptible to two critical risks that could allow threat actors to escalate privileges, gain cross-tenant access to other customers' models, and even take over the continuous integration and continuous deployment (CI/CD) pipelines. "Malicious models represent a major risk to AI systems,
☐ ☆ ✇ The Hacker News

N. Korea-linked Kimsuky Shifts to Compiled HTML Help Files in Ongoing Cyberattacks

By: Newsroom — March 24th 2024 at 05:38
The North Korea-linked threat actor known as Kimsuky (aka Black Banshee, Emerald Sleet, or Springtail) has been observed shifting its tactics, leveraging Compiled HTML Help (CHM) files as vectors to deliver malware for harvesting sensitive data. Kimsuky, active since at least 2012, is known to target entities located in South Korea as well as North America, Asia, and Europe. According
☐ ☆ ✇ The Hacker News

Generative AI Security - Secure Your Business in a World Powered by LLMs

By: The Hacker News — March 20th 2024 at 11:27
Did you know that 79% of organizations are already leveraging Generative AI technologies? Much like the internet defined the 90s and the cloud revolutionized the 2010s, we are now in the era of Large Language Models (LLMs) and Generative AI. The potential of Generative AI is immense, yet it brings significant challenges, especially in security integration. Despite their powerful capabilities,
☐ ☆ ✇ The Hacker News

Crafting and Communicating Your Cybersecurity Strategy for Board Buy-In

By: The Hacker News — March 19th 2024 at 10:37
In an era where digital transformation drives business across sectors, cybersecurity has transcended its traditional operational role to become a cornerstone of corporate strategy and risk management. This evolution demands a shift in how cybersecurity leaders—particularly Chief Information Security Officers (CISOs)—articulate the value and urgency of cybersecurity investments to their boards.
☐ ☆ ✇ The Hacker News

Ex-Google Engineer Arrested for Stealing AI Technology Secrets for China

By: Newsroom — March 7th 2024 at 10:19
The U.S. Department of Justice (DoJ) announced the indictment of a 38-year-old Chinese national and a California resident for allegedly stealing proprietary information from Google while covertly working for two China-based tech companies. Linwei Ding (aka Leon Ding), a former Google engineer who was arrested on March 6, 2024, "transferred sensitive Google trade secrets and other confidential
☐ ☆ ✇ The Hacker News

Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

By: Newsroom — March 5th 2024 at 10:38
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware. “The number of infected devices decreased slightly in mid- and late
☐ ☆ ✇ The Hacker News

From 500 to 5000 Employees - Securing 3rd Party App-Usage in Mid-Market Companies

By: The Hacker News — March 4th 2024 at 11:12
A company’s lifecycle stage, size, and state have a significant impact on its security needs, policies, and priorities. This is particularly true for modern mid-market companies that are either experiencing or have experienced rapid growth. As requirements and tasks continue to accumulate and malicious actors remain active around the clock, budgets are often stagnant at best. Yet, it is crucial
☐ ☆ ✇ The Hacker News

Over 100 Malicious AI/ML Models Found on Hugging Face Platform

By: Newsroom — March 4th 2024 at 09:22
As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered in the Hugging Face platform. These include instances where loading a pickle file leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims'
☐ ☆ ✇ WIRED

Here Come the AI Worms

By: Matt Burgess — March 1st 2024 at 09:00
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.
☐ ☆ ✇ The Hacker News

Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

By: Newsroom — February 23rd 2024 at 11:31
Microsoft has released an open access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team
☐ ☆ ✇ The Hacker News

Google Open Sources Magika: AI-Powered File Identification Tool

By: Newsroom — February 17th 2024 at 07:26
Google has announced that it's open-sourcing Magika, an artificial intelligence (AI)-powered tool to identify file types, to help defenders accurately detect binary and textual file types. "Magika outperforms conventional file identification methods providing an overall 30% accuracy boost and up to 95% higher precision on traditionally hard to identify, but potentially problematic content
☐ ☆ ✇ The Hacker News

Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyber Attacks

By: Newsroom — February 14th 2024 at 14:39
Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations. The findings come from a report published by Microsoft in collaboration with OpenAI, both of which said they disrupted efforts made by five state-affiliated actors that used its
☐ ☆ ✇ McAfee Blogs

How to Protect Your Privacy From Generative AI

By: Jasdev Dhaliwal — June 28th 2023 at 13:31

With the rise of artificial intelligence (AI) and machine learning, concerns about the privacy of personal data have reached an all-time high. Generative AI is a type of AI that can generate new data from existing data, such as images, videos, and text. This technology can be used for a variety of purposes, from facial recognition to creating “deepfakes” and manipulating public opinion. As a result, it’s important to be aware of the potential risks that generative AI poses to your privacy.  

In this blog post, we’ll discuss how to protect your privacy from generative AI. 

1. Understand what generative AI is and how it works.

Generative AI is a type of AI that uses existing data to generate new data. It’s usually used for things like facial recognition, speech recognition, and image and video generation. This technology can be used for both good and bad purposes, so it’s important to understand how it works and the potential risks it poses to your privacy. 

2. Be aware of the potential risks.

Generative AI can be used to create deepfakes, which are fake images or videos that are generated using existing data. This technology can be used for malicious purposes, such as manipulating public opinion, identity theft, and spreading false information. It’s important to be aware of the potential risks that generative AI poses to your privacy. 

3. Be careful with the data you share online.

Generative AI uses existing data to generate new data, so it’s important to be aware of what data you’re sharing online. Only share data that you’re comfortable sharing, use strong passwords, and enable two-factor authentication whenever possible. 
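On the password point, every major language ships a cryptographically secure random generator that makes strong, unique passwords trivial to produce. A quick sketch in Python (the length and alphabet here are arbitrary illustrative choices, not a formal recommendation):

```python
import secrets
import string

# Draw each character from the OS's CSPRNG via the secrets module;
# random.choice() is NOT suitable for passwords or tokens.
alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
password = "".join(secrets.choice(alphabet) for _ in range(20))

print(len(password))  # → 20
```

Pairing a generated password like this with a password manager and two-factor authentication covers the advice above.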

4. Use privacy-focused tools.

There are a number of privacy-focused tools available that can help protect your data from generative AI. These include tools like privacy-focused browsers, VPNs, and encryption tools. It’s important to understand how these tools work and how they can help protect your data. 

5. Stay informed.

It’s important to stay up-to-date on the latest developments in generative AI and privacy. Follow trusted news sources and keep an eye out for changes in the law that could affect your privacy. 

By following these tips, you can help protect your privacy from generative AI. It’s important to be aware of the potential risks that this technology poses and to take steps to protect yourself and your data. 

Of course, the most important step is to be aware and informed. Research the organizations that are using generative AI and make sure you understand how they use your data. Be sure to read the terms and conditions of any contracts you sign and be aware of any third parties that may have access to your data. Additionally, look out for notifications of changes in privacy policies and take the time to understand any changes that could affect you. 

Finally, make sure to regularly check your accounts and reports to make sure that your data is not being used without your consent. You can also take the extra step of making use of the security and privacy features available on your device. Taking the time to understand which settings are available, as well as what data is being collected and used, can help you protect your privacy and keep your data safe. 

 

This blog post was co-written with artificial intelligence (AI) as a tool to supplement, enhance, and make suggestions. While AI may assist in the creative and editing process, the thoughts, ideas, opinions, and the finished product are entirely human and original to their author. We strive to ensure accuracy and relevance, but please be aware that AI-generated content may not always fully represent the intent or expertise of human-authored material. 

The post How to Protect Your Privacy From Generative AI appeared first on McAfee Blog.

☐ ☆ ✇ WIRED

London Underground Is Testing Real-Time AI Surveillance Tools to Spot Crime

By: Matt Burgess — February 8th 2024 at 17:55
In a test at one station, Transport for London used a computer vision system to try and detect crime and weapons, people falling on the tracks, and fare dodgers, documents obtained by WIRED show.
☐ ☆ ✇ WIRED

US Lawmakers Tell DOJ to Quit Blindly Funding ‘Predictive’ Police Tools

By: Dell Cameron — January 29th 2024 at 16:19
Members of Congress say the DOJ is funding the use of AI tools that further discriminatory policing practices. They're demanding higher standards for federal grants.
☐ ☆ ✇ The Hacker News

Riding the AI Waves: The Rise of Artificial Intelligence to Combat Cyber Threats

By: The Hacker News — January 29th 2024 at 11:11
In nearly every segment of our lives, AI (artificial intelligence) now makes a significant impact: It can deliver better healthcare diagnoses and treatments; detect and reduce the risk of financial fraud; improve inventory management; and serve up the right recommendation for a streaming movie on Friday night. However, one can also make a strong case that some of AI’s most significant impacts
☐ ☆ ✇ The Hacker News

Cyber Threat Landscape: 7 Key Findings and Upcoming Trends for 2024

By: The Hacker News — January 25th 2024 at 11:30
The 2023/2024 Axur Threat Landscape Report provides a comprehensive analysis of the latest cyber threats. The information combines data from the platform's surveillance of the Surface, Deep, and Dark Web with insights derived from the in-depth research and investigations conducted by the Threat Intelligence team. Discover the full scope of digital threats in the Axur Report 2023/2024. Overview
☐ ☆ ✇ The Hacker News

Microsoft's Top Execs' Emails Breached in Sophisticated Russia-Linked APT Attack

By: Newsroom — January 20th 2024 at 03:11
Microsoft on Friday revealed that it was the target of a nation-state attack on its corporate systems that resulted in the theft of emails and attachments from senior executives and other individuals in the company's cybersecurity and legal departments. The Windows maker attributed the attack to a Russian advanced persistent threat (APT) group it tracks as Midnight Blizzard (formerly
☐ ☆ ✇ The Hacker News

There is a Ransomware Armageddon Coming for Us All

By: The Hacker News — January 11th 2024 at 11:43
Generative AI will enable anyone to launch sophisticated phishing attacks that only next-generation MFA devices can stop. The least surprising headline from 2023 is that ransomware again set new records for the number of incidents and the damage inflicted. We saw new headlines every week, which included a who’s-who of big-name organizations. If MGM, Johnson Controls, Clorox, Hanes Brands, Caesars
☐ ☆ ✇ The Hacker News

NIST Warns of Security and Privacy Risks from Rapid AI System Deployment

By: Newsroom — January 8th 2024 at 07:53
The U.S. National Institute of Standards and Technology (NIST) is calling attention to the privacy and security challenges that arise as a result of increased deployment of artificial intelligence (AI) systems in recent years. “These security and privacy challenges include the potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities to
☐ ☆ ✇ WIRED

The Most Dangerous People on the Internet in 2023

By: WIRED Staff — December 28th 2023 at 12:00
From Sam Altman and Elon Musk to ransomware gangs and state-backed hackers, these are the individuals and groups that spent this year disrupting the world as we know it.
☐ ☆ ✇ The Hacker News

Russia's AI-Powered Disinformation Operation Targeting Ukraine, U.S., and Germany

By: Newsroom — December 5th 2023 at 14:58
The Russia-linked influence operation called Doppelganger has targeted Ukrainian, U.S., and German audiences through a combination of inauthentic news sites and social media accounts. These campaigns are designed to amplify content that undermines Ukraine, propagates anti-LGBTQ+ sentiment, questions U.S. military competence, and highlights Germany's economic and social issues, according to a new
☐ ☆ ✇ WIRED

A New Trick Uses AI to Jailbreak AI Models—Including GPT-4

By: Will Knight — December 5th 2023 at 11:00
Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that can make them misbehave.