The Hacker News

Experts Find Flaw in Replicate AI Service Exposing Customers' Models and Data

Cybersecurity researchers have discovered a critical security flaw in the artificial intelligence (AI)-as-a-service provider Replicate that could have allowed threat actors to gain access to proprietary AI models and sensitive information. "Exploitation of this vulnerability would have allowed unauthorized access to the AI prompts and results of all Replicate's platform customers,"

(Cyber) Risk = Probability of Occurrence x Damage

Here’s How to Enhance Your Cyber Resilience with CVSS

In late 2023, the Common Vulnerability Scoring System (CVSS) v4.0 was unveiled, succeeding the eight-year-old CVSS v3.0, with the aim of enhancing vulnerability assessment for both industry and the public. This latest version introduces additional metrics like safety and automation to address criticism of lacking granularity while
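
The risk equation above lends itself to a quick numerical sketch. The findings and dollar figures below are hypothetical, chosen only to show how expected loss can rank remediation priorities; a real program would derive the probability from exploitability data such as CVSS scores rather than from guesses.

```python
# Toy illustration of risk = probability of occurrence x damage.
# The vulnerability names and numbers are hypothetical examples.

def risk_score(probability, damage):
    """Risk as expected loss: probability (0..1) times damage (cost)."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return probability * damage

# Hypothetical findings: (name, yearly probability of exploitation, damage in $)
findings = [
    ("unpatched VPN gateway", 0.30, 500_000),
    ("phishing-prone mailbox", 0.60, 50_000),
    ("legacy test server", 0.05, 20_000),
]

# Rank by expected loss so remediation effort follows risk, not gut feeling.
ranked = sorted(findings, key=lambda f: risk_score(f[1], f[2]), reverse=True)
```

Sorting on expected loss rather than raw severity is the core of risk-based prioritization: the VPN gateway tops the list even though its probability is half that of the phishing finding.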

6 Mistakes Organizations Make When Deploying Advanced Authentication

Deploying advanced authentication measures is key to helping organizations address their weakest cybersecurity link: their human users. Having some form of 2-factor authentication in place is a great start, but many organizations may not yet be in that spot or have the needed level of authentication sophistication to adequately safeguard organizational data. When deploying
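
To make the second factor concrete, here is a minimal TOTP (RFC 6238) sketch using only the standard library. The Base32 secret in the usage note is the RFC's published test key, not a production secret; real deployments provision a unique secret per user, typically via a QR code.

```python
# Minimal TOTP (RFC 6238) sketch: what an authenticator app computes.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Return the time-based one-time password for the given Base32 secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time() if at is None else at) // step
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against the RFC 6238 test key `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`, `totp(..., at=59, digits=8)` reproduces the specification's published value `94287082`.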

Bitcoin Forensic Analysis Uncovers Money Laundering Clusters and Criminal Proceeds

A forensic analysis of a graph dataset containing transactions on the Bitcoin blockchain has revealed clusters associated with illicit activity and money laundering, including detecting criminal proceeds sent to a crypto exchange and previously unknown wallets belonging to a Russian darknet market. The findings come from Elliptic in collaboration with researchers from the…
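
Clustering of this kind often starts from the common-input-ownership heuristic: addresses spent together as inputs of one transaction are presumed to share an owner. A toy union-find sketch over hypothetical transactions and addresses:

```python
# Common-input-ownership heuristic via union-find: merge all input
# addresses of each transaction into one cluster. Data is hypothetical.

def cluster_addresses(transactions):
    """transactions: list of input-address lists; returns address -> cluster root."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for inputs in transactions:
        find(inputs[0])                     # register even single-input txs
        for addr in inputs[1:]:
            union(inputs[0], addr)
    return {addr: find(addr) for addr in parent}

# A1/A2 co-spent, then A2/A3 co-spent => {A1, A2, A3} is one wallet cluster.
txs = [["A1", "A2"], ["A2", "A3"], ["B1"]]
clusters = cluster_addresses(txs)
```

Transitivity does the forensic work here: A1 and A3 never appear in the same transaction, yet they land in the same cluster through A2.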

U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

The U.S. government has unveiled new security guidelines aimed at bolstering critical infrastructure against artificial intelligence (AI)-related threats. "These guidelines are informed by the whole-of-government effort to assess AI risks across all sixteen critical infrastructure sectors, and address threats to, from, and involving AI systems," the Department of Homeland Security (DHS)…

Google Prevented 2.28 Million Malicious Apps from Reaching Play Store in 2023

Google on Monday revealed that almost 200,000 app submissions to its Play Store for Android were either rejected or remediated to address issues with access to sensitive data such as location or SMS messages over the past year. The tech giant also said it blocked 333,000 bad accounts from the app storefront in 2023 for attempting to distribute malware or for repeated policy violations. "In 2023,

AI Copilot: Launching Innovation Rockets, But Beware of the Darkness Ahead

Imagine a world where the software that powers your favorite apps, secures your online transactions, and keeps your digital life safe could be outsmarted and taken over by a cleverly disguised piece of code. This isn't a plot from the latest cyber-thriller; it's actually been a reality for years now. How this will change – in a positive or negative direction – as artificial intelligence (AI) takes on

PyPI Halts Sign-Ups Amid Surge of Malicious Package Uploads Targeting Developers

The maintainers of the Python Package Index (PyPI) repository briefly suspended new user sign-ups following an influx of malicious projects uploaded as part of a typosquatting campaign. PyPI said "new project creation and new user registration" was temporarily halted to mitigate what it said was a "malware upload campaign." The incident was resolved 10 hours later, on March 28, 2024, at 12:56
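
Typosquat screening can be approximated with plain edit distance. This sketch is illustrative only: the package list and the distance threshold are assumptions, and PyPI's actual defenses weigh far more signals than raw string similarity.

```python
# Flag package names within edit distance 1 of a popular project.
# Package names below are illustrative.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

POPULAR = {"requests", "numpy", "pandas"}

def looks_typosquatted(name):
    return any(name != p and levenshtein(name, p) <= 1 for p in POPULAR)
```

For example, `request` (one deleted character from `requests`) is flagged, while the legitimate name itself and unrelated names are not.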

GitHub Launches AI-Powered Autofix Tool to Assist Devs in Patching Security Flaws

GitHub on Wednesday announced that it's making available a feature called code scanning autofix in public beta for all Advanced Security customers to provide targeted recommendations in an effort to avoid introducing new security issues. "Powered by GitHub Copilot and CodeQL, code scanning autofix covers more than 90% of alert types in JavaScript, TypeScript, Java, and

From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News.
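
A minimal illustration of why string-based rules are fragile. The marker string and the "rule" here are hypothetical and not real YARA syntax; the point is only that a literal-substring match misses a behaviorally identical variant whose source has been trivially rewritten:

```python
# A string rule keys on a literal byte sequence; a rewrite that assembles
# the same marker at runtime slips past it. Samples are hypothetical.

RULE_STRINGS = [b"EvilCorp-Loader-v1"]

def matches_rule(sample):
    """Naive string-based detection over raw bytes."""
    return any(s in sample for s in RULE_STRINGS)

original = b'banner = "EvilCorp-Loader-v1"'
# An LLM-style rewrite: same behavior, marker assembled at runtime.
variant = b'banner = "".join(["Evil", "Corp-", "Loader-v1"])'
```

The variant still produces the identical banner when executed, but the contiguous byte pattern the rule depends on no longer exists in the file.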

Over 100 Malicious AI/ML Models Found on Hugging Face Platform

As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered in the Hugging Face platform. These include instances where loading a pickle file leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims'
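
The pickle-to-code-execution path is worth seeing concretely: unpickling invokes whatever callable an object's `__reduce__` returns, before any type check the caller might perform. This is a deliberately harmless demo using `eval`; a real payload would call `os.system` or similar to obtain a shell.

```python
# Why loading an untrusted pickle means code execution: pickle.loads
# calls the callable returned by __reduce__.
import pickle

class Payload:
    def __reduce__(self):
        # pickle.loads will execute eval("21 * 2") and return its result.
        return (eval, ("21 * 2",))

blob = pickle.dumps(Payload())
obj = pickle.loads(blob)   # arbitrary code runs here; obj is 42, not a Payload
```

Note that the serialized blob contains no `Payload` class at all, only a reference to `eval` and its argument, which is why scanning for suspicious class names is insufficient and why safer formats such as Safetensors exist.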

New Hugging Face Vulnerability Exposes AI Models to Supply Chain Attacks

Cybersecurity researchers have found that it's possible to compromise the Hugging Face Safetensors conversion service to ultimately hijack the models submitted by users and result in supply chain attacks. "It's possible to send malicious pull requests with attacker-controlled data from the Hugging Face service to any repository on the platform, as well as hijack any models that are submitted

Three Tips to Protect Your Secrets from AI Accidents

Last year, the Open Worldwide Application Security Project (OWASP) published multiple versions of the "OWASP Top 10 For Large Language Models," reaching a 1.0 document in August and a 1.1 document in October. These documents not only demonstrate the rapidly evolving nature of Large Language Models, but the evolving ways in which they can be attacked and defended. We're going to talk in this

Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Microsoft has released an open access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team

How to Achieve the Best Risk-Based Alerting (Bye-Bye SIEM)

Did you know that Network Detection and Response (NDR) has become the most effective technology to detect cyber threats? In contrast to SIEM, NDR offers adaptive cybersecurity with reduced false alerts and efficient threat response. NDR massively

Google Open Sources Magika: AI-Powered File Identification Tool

Google has announced that it's open-sourcing Magika, an artificial intelligence (AI)-powered tool to identify file types, to help defenders accurately detect binary and textual file types. "Magika outperforms conventional file identification methods providing an overall 30% accuracy boost and up to 95% higher precision on traditionally hard to identify, but potentially problematic content
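
For contrast, the conventional approach Magika is benchmarked against is signature matching on leading "magic bytes." A tiny sketch (the signature table is a small illustrative subset); its brittleness on headerless or textual content is exactly what learned models target:

```python
# Conventional file identification: match well-known leading byte signatures.

MAGIC = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"%PDF-": "pdf",
    b"PK\x03\x04": "zip",
    b"\x7fELF": "elf",
}

def identify(data):
    """Return a file type from leading magic bytes, or 'unknown'."""
    for sig, kind in MAGIC.items():
        if data.startswith(sig):
            return kind
    return "unknown"
```

Anything without a distinctive header, such as scripts, configs, or plain text, falls straight through to "unknown," which is where content-based classifiers earn their accuracy gains.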

Riding the AI Waves: The Rise of Artificial Intelligence to Combat Cyber Threats

In nearly every segment of our lives, AI (artificial intelligence) now makes a significant impact: It can deliver better healthcare diagnoses and treatments; detect and reduce the risk of financial fraud; improve inventory management; and serve up the right recommendation for a streaming movie on Friday night. However, one can also make a strong case that some of AI’s most significant impacts

TensorFlow CI/CD Flaw Exposed Supply Chain to Poisoning Attacks

Continuous integration and continuous delivery (CI/CD) misconfigurations discovered in the open-source TensorFlow machine learning framework could have been exploited to orchestrate supply chain attacks. The misconfigurations could be abused by an attacker to "conduct a supply chain compromise of TensorFlow releases on GitHub and PyPI by compromising TensorFlow's build agents via

This Free Discovery Tool Finds and Mitigates AI-SaaS Risks

Wing Security announced today that it now offers free discovery and a paid tier for automated control over thousands of AI and AI-powered SaaS applications. This will allow companies to better protect their intellectual property (IP) and data against the growing and evolving risks of AI usage. SaaS applications seem to be multiplying by the day, and so does their integration of AI

Getting off the Attack Surface Hamster Wheel: Identity Can Help

IT professionals have developed a sophisticated understanding of the enterprise attack surface – what it is, how to quantify it and how to manage it.  The process is simple: begin by thoroughly assessing the attack surface, encompassing the entire IT environment. Identify all potential entry and exit points where unauthorized access could occur. Strengthen these vulnerable points using

NIST Warns of Security and Privacy Risks from Rapid AI System Deployment

The U.S. National Institute of Standards and Technology (NIST) is calling attention to the privacy and security challenges that arise as a result of increased deployment of artificial intelligence (AI) systems in recent years. "These security and privacy challenges include the potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities to

Google Unveils RETVec - Gmail's New Defense Against Spam and Malicious Emails

Google has revealed a new multilingual text vectorizer called RETVec (short for Resilient and Efficient Text Vectorizer) to help detect potentially harmful content such as spam and malicious emails in Gmail. "RETVec is trained to be resilient against character-level manipulations including insertion, deletion, typos, homoglyphs, LEET substitution, and more," according to the…
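
The character-level manipulations listed above can be illustrated with a toy normalizer: a naive keyword filter misses LEET and homoglyph variants until the text is canonicalized first. The mapping table and keyword are illustrative only; RETVec learns this resilience rather than using a lookup table:

```python
# Canonicalize LEET digits and a couple of Cyrillic homoglyphs before
# keyword matching. Tiny illustrative subset of real-world confusables.

HOMOGLYPHS = {
    "0": "o", "1": "l", "3": "e", "4": "a", "5": "s", "7": "t",
    "а": "a",  # Cyrillic a (U+0430)
    "е": "e",  # Cyrillic e (U+0435)
}

def normalize(text):
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text.lower())

def is_spammy(text):
    """Naive keyword filter that only works after normalization."""
    return "free money" in normalize(text)
```

Without `normalize`, the literal check `"free money" in text.lower()` would miss `"FR33 M0NEY"` entirely, which is precisely the evasion class the vectorizer is trained against.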

U.S., U.K., and Global Partners Release Secure AI System Development Guidelines

The U.K. and U.S., along with international partners from 16 other countries, have released new guidelines for the development of secure artificial intelligence (AI) systems. "The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority," the U.S.

Predictive AI in Cybersecurity: Outcomes Demonstrate All AI is Not Created Equally

Here is what matters most when it comes to artificial intelligence (AI) in cybersecurity: Outcomes. As the threat landscape evolves and generative AI is added to the toolsets available to defenders and attackers alike, evaluating the relative effectiveness of various AI-based security offerings is increasingly important, and difficult. Asking the right questions can help you spot solutions

Microsoft's AI-Powered Bing Chat Ads May Lead Users to Malware-Distributing Sites

By: THN
Malicious ads served inside Microsoft Bing's artificial intelligence (AI) chatbot are being used to distribute malware when searching for popular tools. The findings come from Malwarebytes, which revealed that unsuspecting users can be tricked into visiting booby-trapped sites and installing malware directly from Bing Chat conversations. Introduced by Microsoft in February 2023, Bing Chat is an…

New AMBERSQUID Cryptojacking Operation Targets Uncommon AWS Services

By: THN
A novel cloud-native cryptojacking operation has set its eyes on uncommon Amazon Web Services (AWS) offerings such as AWS Amplify, AWS Fargate, and Amazon SageMaker to illicitly mine cryptocurrency. The malicious cyber activity has been codenamed AMBERSQUID by cloud and container security firm Sysdig. "The AMBERSQUID operation was able to exploit cloud services without triggering the AWS

Everything You Wanted to Know About AI Security but Were Afraid to Ask

There’s been a great deal of AI hype recently, but that doesn’t mean the robots are here to replace us. This article sets the record straight and explains how businesses should approach AI. From musing about self-driving cars to fearing AI bots that could destroy the world, there has been a great deal of AI hype in the past few years. AI has captured our imaginations, dreams, and occasionally,

Learn How Your Business Data Can Amplify Your AI/ML Threat Detection Capabilities

In today's digital landscape, your business data is more than just numbers; it's a powerhouse. Imagine leveraging this data not only for profit but also for enhanced AI and Machine Learning (ML) threat detection. For companies like Comcast, this isn't a dream. It's reality. Your business comprehends its risks, vulnerabilities, and the unique environment in which it operates. No generic,

The Vulnerability of Zero Trust: Lessons from the Storm 0558 Hack

While IT security managers in companies and public administrations rely on the concept of Zero Trust, APTs (Advanced Persistent Threats) are putting its practical effectiveness to the test. Analysts, on the other hand, understand that Zero Trust can only be achieved with comprehensive insight into one's own network. Just recently, an attack believed to be perpetrated by the Chinese hacker group

Unveiling the Unseen: Identifying Data Exfiltration with Machine Learning

Why is Data Exfiltration Detection Paramount? The world is witnessing an exponential rise in ransomware and data theft employed to extort companies. At the same time, the industry faces numerous critical vulnerabilities in database software and company websites. This evolution paints a dire picture of data exposure and exfiltration that every security leader and team is grappling with. This
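
Before any machine learning, a simple statistical baseline already catches gross exfiltration: flag hosts whose outbound volume is an extreme outlier against the fleet. A z-score sketch over hypothetical traffic numbers (real systems add many more features, such as destination reputation and time-of-day patterns):

```python
# Flag hosts whose outbound traffic sits far above the fleet mean.
# Host names and volumes are hypothetical.
import statistics

def exfil_suspects(outbound_mb, threshold=3.0):
    """outbound_mb: host -> MB sent; returns hosts with z-score > threshold."""
    values = list(outbound_mb.values())
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [h for h, v in outbound_mb.items() if (v - mean) / stdev > threshold]

# Nineteen workstations around 100 MB/day, one database host pushing 9 GB.
traffic = {f"ws-{i:02d}": 100 + (i % 5) for i in range(19)}
traffic["db-01"] = 9000
suspects = exfil_suspects(traffic)
```

A global z-score like this is deliberately crude; per-host baselines and robust statistics (median absolute deviation) are the usual next steps once the trivial cases are covered.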

Italian Watchdog Bans OpenAI's ChatGPT Over Data Protection Concerns

The Italian data protection watchdog, Garante per la Protezione dei Dati Personali (aka Garante), has imposed a temporary ban on OpenAI's ChatGPT service in the country, citing data protection concerns. To that end, it has ordered the company to stop processing users' data with immediate effect, stating it intends to investigate the company over whether it's unlawfully processing such data in

Microsoft Introduces GPT-4 AI-Powered Security Copilot Tool to Empower Defenders

Microsoft on Tuesday unveiled Security Copilot in limited preview, marking its continued quest to embed AI-oriented features in an attempt to offer "end-to-end defense at machine speed and scale." Powered by OpenAI's GPT-4 generative AI and its own security-specific model, it's billed as a security analysis tool that enables cybersecurity analysts to quickly respond to threats, process signals,