FreshRSS

✇ Security – Cisco Blog

AI Agent for Color Red

By: Dr. Giannis Tziakouris — May 8th 2025 at 12:00
AI can automate the analysis, generation, testing, and reporting of exploits. It's particularly relevant in penetration testing and ethical hacking scenarios.
✇ WIRED

Think Twice Before Creating That ChatGPT Action Figure

By: Kate O'Flaherty — May 1st 2025 at 13:56
People are using ChatGPT’s new image generator to take part in viral social media trends. But using it also puts your privacy at risk—unless you take a few simple steps to protect yourself.
✇ WIRED

AI Code Hallucinations Increase the Risk of ‘Package Confusion’ Attacks

By: Dan Goodin, Ars Technica — April 30th 2025 at 19:08
A new study found that code generated by AI is more likely to contain made-up information that can be used to trick software into interacting with malicious code.
✇ Security – Cisco Blog

Foundation-sec-8b: Cisco Foundation AI’s First Open-Source Security Model

By: Yaron Singer — April 28th 2025 at 11:55
Foundation AI's first release — Llama-3.1-FoundationAI-SecurityLLM-base-8B — is designed to improve response time, expand capacity, and proactively reduce risk.
✇ Security – Cisco Blog

Foundation AI: Robust Intelligence for Cybersecurity

By: Yaron Singer — April 28th 2025 at 11:55
Foundation AI is a Cisco organization dedicated to bridging the gap between the promise of AI and its practical application in cybersecurity.
✇ Security – Cisco Blog

Cisco XDR Just Changed the Game, Again

By: AJ Shipley — April 28th 2025 at 11:55
Clear verdict. Decisive action. AI speed. Cisco XDR turns noise into clarity and alerts into action—enabling confident, timely response at scale.
✇ Security – Cisco Blog

Does Your SSE Understand User Intent?

By: Prabhu Barathi — April 23rd 2025 at 12:00
Enterprises face several challenges to secure access to AI models and chatbots. Cisco Secure Access extends the security perimeter to address these challenges.
✇ WIRED

Sex-Fantasy Chatbots Are Leaking a Constant Stream of Explicit Messages

By: Matt Burgess — April 11th 2025 at 10:30
Some misconfigured AI chatbots are pushing people’s chats to the open web—revealing sexual prompts and conversations that include descriptions of child sexual abuse.
✇ Security – Cisco Blog

From Firewalls to AI: The Evolution of Real-Time Cyber Defense

By: Gogulakrishnan Thiyagarajan — April 8th 2025 at 12:00
Explore how AI is transforming cyber defense, evolving from traditional firewalls to real-time intrusion detection systems.
✇ Security – Cisco Blog

Cisco Introduces the State of AI Security Report for 2025: Key Developments, Trends, and Predictions in AI Security

By: Emile Antone — March 20th 2025 at 12:00
Cisco is proud to share the State of AI Security report covering key developments in AI security across threat intelligence, policy, and research.
✇ WIRED

A Team of Female Founders Is Launching Cloud Security Tech That Could Overhaul AI Protection

By: Lily Hay Newman — February 25th 2025 at 19:43
Cloud “container” defenses have inconsistencies that can give attackers too much access. A new company, Edera, is taking on that challenge and the problem of the male-dominated startup world.
✇ Security – Cisco Blog

AI Threat Intelligence Roundup: February 2025

By: Adam Swanda — February 25th 2025 at 13:00
AI threat research is a fundamental part of Cisco’s approach to AI security. Our roundups highlight new findings from both original and third-party sources.
✇ WIRED

‘OpenAI’ Job Scam Targeted International Workers Through Telegram

By: Reece Rogers — February 25th 2025 at 11:30
An alleged job scam, led by “Aiden” from “OpenAI,” recruited workers in Bangladesh for months before disappearing overnight, according to FTC complaints obtained by WIRED.
✇ WIRED

The National Institute of Standards and Technology Braces for Mass Firings

By: Will Knight, Paresh Dave, Leah Feiger — February 20th 2025 at 20:19
Approximately 500 NIST staffers, including at least three lab directors, are expected to lose their jobs at the standards agency as part of the ongoing DOGE purge, sources tell WIRED.
✇ Security – Cisco Blog

Achieve Transformative Network Security With Cisco Hybrid Mesh Firewall

By: Rick Miles — February 12th 2025 at 08:30
Hybrid Mesh Firewall addresses three forces: fine-grained composition and distribution of apps in data centers, complex modern networks, and sophisticated threats.
✇ Security – Cisco Blog

Cisco and Wiz Collaborate to Enhance Cloud Security: Tackling AI-Generating Threats in Complex IT Infrastructures

By: Rick Miles — February 12th 2025 at 08:30
Cisco is collaborating with Wiz. Together, they aim to improve cloud security for enterprises grappling with AI-generated threats in intricate IT landscapes.
✇ Krebs on Security

Experts Flag Security, Privacy Risks in DeepSeek AI App

By: BrianKrebs — February 6th 2025 at 21:12

New mobile apps from the Chinese artificial intelligence (AI) company DeepSeek have remained among the top three “free” downloads for Apple and Google devices since their debut on Jan. 25, 2025. But experts caution that many of DeepSeek’s design choices — such as using hard-coded encryption keys, and sending unencrypted user and device data to Chinese companies — introduce a number of glaring security and privacy risks.

Public interest in the DeepSeek AI chat apps swelled following widespread media reports that the upstart Chinese AI firm had managed to match the abilities of cutting-edge chatbots while using a fraction of the specialized computer chips that leading AI companies rely on. As of this writing, DeepSeek is the third most-downloaded “free” app on the Apple store, and #1 on Google Play.

DeepSeek’s rapid rise caught the attention of the mobile security firm NowSecure, a Chicago-based company that helps clients screen mobile apps for security and privacy threats. In a teardown of the DeepSeek app published today, NowSecure urged organizations to remove the DeepSeek iOS mobile app from their environments, citing security concerns.

NowSecure founder Andrew Hoog said his team hasn't yet completed an in-depth analysis of the DeepSeek app for Android devices, but that there is little reason to believe its basic design would be functionally much different.

Hoog told KrebsOnSecurity there were a number of qualities about the DeepSeek iOS app that suggest the presence of deep-seated security and privacy risks. For starters, he said, the app collects an awful lot of data about the user’s device.

“They are doing some very interesting things that are on the edge of advanced device fingerprinting,” Hoog said, noting that one property of the app tracks the device’s name — which for many iOS devices defaults to the customer’s name followed by the type of iOS device.

The device information shared, combined with the user’s Internet address and data gathered from mobile advertising companies, could be used to deanonymize users of the DeepSeek iOS app, NowSecure warned. The report notes that DeepSeek communicates with Volcengine, a cloud platform developed by ByteDance (the makers of TikTok), although NowSecure said it wasn’t clear whether the app is simply leveraging ByteDance’s digital transformation cloud service or whether the declared information sharing extends further between the two companies.

Image: NowSecure.

Perhaps more concerning, NowSecure said the iOS app transmits device information “in the clear,” without any encryption to encapsulate the data. This means the data being handled by the app could be intercepted, read, and even modified by anyone who has access to any of the networks that carry the app’s traffic.

“The DeepSeek iOS app globally disables App Transport Security (ATS) which is an iOS platform level protection that prevents sensitive data from being sent over unencrypted channels,” the report observed. “Since this protection is disabled, the app can (and does) send unencrypted data over the internet.”
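
For background, App Transport Security is controlled by keys in an iOS app's Info.plist. The global opt-out the report describes corresponds to a setting like the following (an illustrative fragment, not taken from the DeepSeek app itself):

```xml
<!-- Info.plist: disabling ATS for every domain allows cleartext HTTP -->
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
```

Apple's documentation says App Review expects a justification when this key is set, which is one reason analysts treat it as a deliberate design choice rather than an oversight.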

Hoog said the app does selectively encrypt portions of the responses coming from DeepSeek servers. But they also found it uses an insecure and now deprecated encryption algorithm called 3DES (aka Triple DES), and that the developers had hard-coded the encryption key. That means the cryptographic key needed to decipher those data fields can be extracted from the app itself.
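
A hard-coded key is recoverable because it sits in the binary as ordinary data. One common way analysts locate embedded keys is an entropy scan: random key material stands out against surrounding low-entropy strings. A minimal sketch of the idea (illustrative only, not NowSecure's actual tooling):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; near-random data scores highest."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def find_key_candidates(blob: bytes, window: int = 24, threshold: float = 4.0):
    """Slide a fixed window over a binary and flag high-entropy runs.
    24 bytes matches the size of a 3DES key, the algorithm the report
    says was hard-coded into the app."""
    return [
        (i, blob[i:i + window])
        for i in range(len(blob) - window + 1)
        if shannon_entropy(blob[i:i + window]) > threshold
    ]

# Illustrative: a fake "binary" with a distinct 24-byte stand-in key
# embedded between low-entropy padding.
blob = b"A" * 40 + bytes(range(100, 124)) + b"A" * 40
hits = find_key_candidates(blob)
print(hits[0][0])  # windows overlapping the embedded key light up
```

Real tools combine this with string and cross-reference analysis, but the principle is the same: once the key bytes are located, anything encrypted with them can be decrypted offline.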

There were other, less alarming security and privacy issues highlighted in the report, but Hoog said he’s confident there are additional, unseen security concerns lurking within the app’s code.

“When we see people exhibit really simplistic coding errors, as you dig deeper there are usually a lot more issues,” Hoog said. “There is virtually no priority around security or privacy. Whether cultural, or mandated by China, or a witting choice, taken together they point to significant lapse in security and privacy controls, and that puts companies at risk.”

Apparently, plenty of others share this view. Axios reported on January 30 that U.S. congressional offices are being warned not to use the app.

“[T]hreat actors are already exploiting DeepSeek to deliver malicious software and infect devices,” read the notice from the chief administrative officer for the House of Representatives. “To mitigate these risks, the House has taken security measures to restrict DeepSeek’s functionality on all House-issued devices.”

TechCrunch reports that Italy and Taiwan have already moved to ban DeepSeek over security concerns. Bloomberg writes that The Pentagon has blocked access to DeepSeek. CNBC says NASA also banned employees from using the service, as did the U.S. Navy.

Beyond security concerns tied to the DeepSeek iOS app, there are indications the Chinese AI company may be playing fast and loose with the data that it collects from and about users. On January 29, researchers at Wiz said they discovered a publicly accessible database linked to DeepSeek that exposed “a significant volume of chat history, backend data and sensitive information, including log streams, API secrets, and operational details.”

“More critically, the exposure allowed for full database control and potential privilege escalation within the DeepSeek environment, without any authentication or defense mechanism to the outside world,” Wiz wrote. [Full disclosure: Wiz is currently an advertiser on this website.]

KrebsOnSecurity sought comment on the report from DeepSeek and from Apple. This story will be updated with any substantive replies.

✇ Security – Cisco Blog

AI Cyber Threat Intelligence Roundup: January 2025

By: Adam Swanda — February 1st 2025 at 13:00
AI threat research is a fundamental part of Cisco’s approach to AI security. Our roundups highlight new findings from both original and third-party sources.
✇ WIRED

DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot

By: Matt Burgess, Lily Hay Newman — January 31st 2025 at 18:30
Security researchers tested 50 well-known jailbreaks against DeepSeek’s popular new AI chatbot. It didn’t stop a single one.
✇ WIRED

Exposed DeepSeek Database Revealed Chat Prompts and Internal Data

By: Lily Hay Newman, Matt Burgess — January 29th 2025 at 21:34
China-based DeepSeek has exploded in popularity, drawing greater scrutiny. Case in point: Security researchers found more than 1 million records, including user data and API keys, in an open database.
✇ WIRED

DeepSeek’s Popular AI App Is Explicitly Sending US Data to China

By: Matt Burgess, Lily Hay Newman — January 27th 2025 at 22:10
Amid ongoing fears over TikTok, Chinese generative AI platform DeepSeek says it’s sending heaps of US user data straight to its home country, potentially setting the stage for greater scrutiny.
✇ Security – Cisco Blog

Cisco AI Defense: Comprehensive Security for Enterprise AI Adoption

By: DJ Sampath — January 15th 2025 at 13:00
Cisco AI Defense is a single, end-to-end solution that helps your organization understand and mitigate risk on both the user and application levels.
✇ Security – Cisco Blog

Advancing AI Security and Contributing to CISA’s JCDC AI Efforts 

By: Omar Santos — January 14th 2025 at 15:15
Discover how CISA's new AI Security Incident Collaboration Playbook strengthens AI security and resilience.
✇ WIRED

Worry About Misuse of AI, Not Superintelligence

By: Arvind Narayanan, Sayash Kapoor — December 13th 2024 at 14:00
AI risks arise not from AI acting on its own, but because of what people do with it.
✇ Security – Cisco Blog

Robust Intelligence, Now Part of Cisco, Recognized as a 2024 Gartner® Cool Vendor™ for AI Security

By: Emile Antone — November 11th 2024 at 13:00
Cisco is excited that Robust Intelligence, a recently acquired AI security startup, is mentioned in the 2024 Gartner Cool Vendors for AI Security report.
✇ Security – Cisco Blog

Quality is Priority Zero, Especially for Security

By: Shailaja Shankar — October 21st 2024 at 12:00
Security software can be the first line of defense or the last, and the cost of failure is catastrophic. That's why quality is priority zero for Cisco.
✇ Security – Cisco Blog

Using Artificial Intelligence to Catch Sneaky Images in Email

By: Greg Barnes — October 16th 2024 at 12:00
Image-based fraud in email can be challenging to detect and prevent. By leveraging AI, security teams can make inboxes more secure.
✇ WIRED

This AI Tool Helped Convict People of Murder. Then Someone Took a Closer Look

By: Todd Feathers — October 15th 2024 at 11:00
Global Intelligence claims its Cybercheck technology can help cops find key evidence to nail a case. But a WIRED investigation reveals the smoking gun often appears far less solid.
✇ WIRED

Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram

By: Matt Burgess — October 15th 2024 at 10:30
Bots that “remove clothes” from images have run rampant on the messaging app, allowing people to create nonconsensual deepfake images even as lawmakers and tech companies try to crack down.
✇ Security – Cisco Blog

Delivering Modernized Security for Government Agencies: The Vital Role of FedRAMP

By: Shailaja Shankar — October 14th 2024 at 12:00
Cisco has been helping government agencies address their unique security and compliance challenges for decades. We continue to progress with FedRAMP.
✇ Security – Cisco Blog

Introducing Cisco’s AI Security Best Practice Portal

By: Omar Santos — October 10th 2024 at 12:00
Cisco's AI Security Portal contains resources to help you secure your AI implementation, whether you're a seasoned professional or new to the field.
✇ WIRED

What You Need to Know About Grok AI and Your Privacy

By: Kate O'Flaherty — September 9th 2024 at 10:30
xAI’s generative AI tool, Grok AI, is unhinged compared to its competitors. It’s also scooping up a ton of data that people post on X. Here’s how to keep your posts out of Grok—and why you should.
✇ Security – Cisco Blog

Introducing the Coalition for Secure AI (CoSAI)

By: Omar Santos — July 18th 2024 at 15:00
Announcing the launch of the Coalition for Secure AI (CoSAI) to help securely build, deploy, and operate AI systems to mitigate AI-specific security risks.
✇ WIRED

AI-Powered Super Soldiers Are More Than Just a Pipe Dream

By: Jared Keller — July 8th 2024 at 10:00
The US military has abandoned its half-century dream of a suit of powered armor in favor of a “hyper enabled operator,” a tactical AI assistant for special operations forces.
✇ WIRED

Amazon Is Investigating Perplexity Over Claims of Scraping Abuse

By: Dhruv Mehrotra, Andrew Couts — June 27th 2024 at 22:15
AWS hosted a server linked to the Bezos family- and Nvidia-backed search startup that appears to have been used to scrape the sites of major outlets, prompting an inquiry into potential rules violations.
✇ Security – Cisco Blog

Enhancing AI Security Incident Response Through Collaborative Exercises

By: Omar Santos — June 21st 2024 at 12:00
Takeaways from a tabletop exercise led by CISA's Joint Cyber Defense Collaborative (JCDC), which brought together government and industry leaders to enhance our collective ability to respond to AI-related security incidents.
✇ Security – Cisco Blog

Security, the cloud, and AI: building powerful outcomes while simplifying your experience

By: Rick Miles — June 7th 2024 at 12:00
Read how Cisco Security Cloud Control prioritizes consolidation of tools and simplification of security policy without compromising your defense.
✇ Security – Cisco Blog

Cisco Security at Cisco Live 2024: Innovating at Scale

By: Jeetu Patel — June 4th 2024 at 15:06
No matter how reliable and performant your network is, it doesn’t matter if it’s not secure. To help make the world a safer place, we need to reimagine security.
✇ The Hacker News

New Tricks in the Phishing Playbook: Cloudflare Workers, HTML Smuggling, GenAI

By: Newsroom — May 27th 2024 at 09:02
Cybersecurity researchers are alerting of phishing campaigns that abuse Cloudflare Workers to serve phishing sites that are used to harvest users' credentials associated with Microsoft, Gmail, Yahoo!, and cPanel Webmail. The attack method, called transparent phishing or adversary-in-the-middle (AitM) phishing, "uses Cloudflare Workers to act as a reverse proxy server for a
✇ The Hacker News

Five Core Tenets Of Highly Effective DevSecOps Practices

By: The Hacker News — May 21st 2024 at 11:33
One of the enduring challenges of building modern applications is to make them more secure without disrupting high-velocity DevOps processes or degrading the developer experience. Today’s cyber threat landscape is rife with sophisticated attacks aimed at all different parts of the software supply chain and the urgency for software-producing organizations to adopt DevSecOps practices that deeply
✇ The Hacker News

China-Linked Hackers Adopt Two-Stage Infection Tactic to Deploy Deuterbear RAT

By: Newsroom — May 17th 2024 at 11:20
Cybersecurity researchers have shed more light on a remote access trojan (RAT) known as Deuterbear used by the China-linked BlackTech hacking group as part of a cyber espionage campaign targeting the Asia-Pacific region this year. "Deuterbear, while similar to Waterbear in many ways, shows advancements in capabilities such as including support for shellcode plugins, avoiding handshakes
✇ The Hacker News

CensysGPT: AI-Powered Threat Hunting for Cybersecurity Pros (Webinar)

By: The Hacker News — May 10th 2024 at 12:52
Artificial intelligence (AI) is transforming cybersecurity, and those leading the charge are using it to outsmart increasingly advanced cyber threats. Join us for an exciting webinar, "The Future of Threat Hunting is Powered by Generative AI," where you'll explore how AI tools are shaping the future of cybersecurity defenses. During the session, Censys Security Researcher Aidan Holland will
✇ Security – Cisco Blog

Empowering Cybersecurity with AI: The Future of Cisco XDR

By: Siddhant Dash — May 7th 2024 at 07:00
Learn how the Cisco AI Assistant in XDR adds powerful functionality to Cisco XDR that increases defenders' efficiency and accuracy.
✇ WIRED

A Vast New Data Set Could Supercharge the AI Hunt for Crypto Money Laundering

By: Andy Greenberg — May 1st 2024 at 13:00
Blockchain analysis firm Elliptic, MIT, and IBM have released a new AI model—and the 200-million-transaction dataset it's trained on—that aims to spot the “shape” of bitcoin money laundering.
✇ The Hacker News

U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

By: Newsroom — April 30th 2024 at 10:36
The U.S. government has unveiled new security guidelines aimed at bolstering critical infrastructure against artificial intelligence (AI)-related threats. "These guidelines are informed by the whole-of-government effort to assess AI risks across all sixteen critical infrastructure sectors, and address threats both to and from, and involving AI systems," the Department of Homeland Security (DHS)
✇ The Hacker News

Microsoft Warns: North Korean Hackers Turn to AI-Fueled Cyber Espionage

By: Newsroom — April 22nd 2024 at 07:12
Microsoft has revealed that North Korea-linked state-sponsored cyber actors have begun to use artificial intelligence (AI) to make their operations more effective and efficient. "They are learning to use tools powered by AI large language models (LLM) to make their operations more efficient and effective," the tech giant said in its latest report on East Asia hacking groups. The
✇ WIRED

The Biggest Deepfake Porn Website Is Now Blocked in the UK

By: Matt Burgess — April 19th 2024 at 16:54
The world's most-visited deepfake website and another large competing site are stopping people in the UK from accessing them, days after the UK government announced a crackdown.
✇ WIRED

The Real-Time Deepfake Romance Scams Have Arrived

By: Matt Burgess — April 18th 2024 at 11:00
Watch how smooth-talking scammers known as “Yahoo Boys” use widely available face-swapping tech to carry out elaborate romance scams.
✇ The Hacker News

AI Copilot: Launching Innovation Rockets, But Beware of the Darkness Ahead

By: The Hacker News — April 15th 2024 at 13:30
Imagine a world where the software that powers your favorite apps, secures your online transactions, and keeps your digital life could be outsmarted and taken over by a cleverly disguised piece of code. This isn't a plot from the latest cyber-thriller; it's actually been a reality for years now. How this will change – in a positive or negative direction – as artificial intelligence (AI) takes on
✇ WIRED

How to Stop Your Data From Being Used to Train AI

By: Matt Burgess, Reece Rogers — April 10th 2024 at 11:30
Some companies let you opt out of allowing your content to be used for generative AI. Here’s how to take back (at least a little) control from ChatGPT, Google’s Gemini, and more.
✇ The Hacker News

AI-as-a-Service Providers Vulnerable to PrivEsc and Cross-Tenant Attacks

By: Newsroom — April 5th 2024 at 14:08
New research has found that artificial intelligence (AI)-as-a-service providers such as Hugging Face are susceptible to two critical risks that could allow threat actors to escalate privileges, gain cross-tenant access to other customers' models, and even take over the continuous integration and continuous deployment (CI/CD) pipelines. "Malicious models represent a major risk to AI systems,
✇ The Hacker News

N. Korea-linked Kimsuky Shifts to Compiled HTML Help Files in Ongoing Cyberattacks

By: Newsroom — March 24th 2024 at 05:38
The North Korea-linked threat actor known as Kimsuky (aka Black Banshee, Emerald Sleet, or Springtail) has been observed shifting its tactics, leveraging Compiled HTML Help (CHM) files as vectors to deliver malware for harvesting sensitive data. Kimsuky, active since at least 2012, is known to target entities located in South Korea as well as North America, Asia, and Europe. According
✇ The Hacker News

Generative AI Security - Secure Your Business in a World Powered by LLMs

By: The Hacker News — March 20th 2024 at 11:27
Did you know that 79% of organizations are already leveraging Generative AI technologies? Much like the internet defined the 90s and the cloud revolutionized the 2010s, we are now in the era of Large Language Models (LLMs) and Generative AI. The potential of Generative AI is immense, yet it brings significant challenges, especially in security integration. Despite their powerful capabilities,
✇ The Hacker News

Crafting and Communicating Your Cybersecurity Strategy for Board Buy-In

By: The Hacker News — March 19th 2024 at 10:37
In an era where digital transformation drives business across sectors, cybersecurity has transcended its traditional operational role to become a cornerstone of corporate strategy and risk management. This evolution demands a shift in how cybersecurity leaders—particularly Chief Information Security Officers (CISOs)—articulate the value and urgency of cybersecurity investments to their boards.
✇ The Hacker News

Ex-Google Engineer Arrested for Stealing AI Technology Secrets for China

By: Newsroom — March 7th 2024 at 10:19
The U.S. Department of Justice (DoJ) announced the indictment of a 38-year-old Chinese national and a California resident for allegedly stealing proprietary information from Google while covertly working for two China-based tech companies. Linwei Ding (aka Leon Ding), a former Google engineer who was arrested on March 6, 2024, "transferred sensitive Google trade secrets and other confidential
✇ The Hacker News

Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

By: Newsroom — March 5th 2024 at 10:38
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware. “The number of infected devices decreased slightly in mid- and late
✇ The Hacker News

From 500 to 5000 Employees - Securing 3rd Party App-Usage in Mid-Market Companies

By: The Hacker News — March 4th 2024 at 11:12
A company’s lifecycle stage, size, and state have a significant impact on its security needs, policies, and priorities. This is particularly true for modern mid-market companies that are either experiencing or have experienced rapid growth. As requirements and tasks continue to accumulate and malicious actors remain active around the clock, budgets are often stagnant at best. Yet, it is crucial
✇ The Hacker News

Over 100 Malicious AI/ML Models Found on Hugging Face Platform

By: Newsroom — March 4th 2024 at 09:22
As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered in the Hugging Face platform. These include instances where loading a pickle file leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims'
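
The pickle risk described here is inherent to the format: deserialization can invoke an arbitrary callable chosen by whoever built the file, which is exactly how a "model" grants an attacker a shell. A minimal, harmless sketch of the mechanism (using `len` as a benign stand-in for something like `os.system`):

```python
import pickle

class MaliciousModel:
    """A 'model' whose __reduce__ smuggles an arbitrary callable.
    pickle.loads() invokes it during deserialization -- no method
    ever needs to be called on the object itself."""
    def __reduce__(self):
        # A real attack would return (os.system, ("<shell command>",));
        # len() is a harmless stand-in for demonstration.
        return (len, ("attacker-controlled",))

payload = pickle.dumps(MaliciousModel())
result = pickle.loads(payload)  # the embedded call runs here, on load
print(result)                   # len() has already executed
```

This is why model hubs increasingly favor serialization formats such as safetensors, which store only tensor data and cannot embed callables.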
✇ WIRED

Here Come the AI Worms

By: Matt Burgess — March 1st 2024 at 09:00
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.
✇ The Hacker News

Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

By: Newsroom — February 23rd 2024 at 11:31
Microsoft has released an open access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team