Artificial intelligence (AI) is transforming cybersecurity, and those leading the charge are using it to outsmart increasingly advanced cyber threats.
Join us for an exciting webinar, "The Future of Threat Hunting is Powered by Generative AI," where you'll explore how AI tools are shaping the future of cybersecurity defenses.
During the session, Censys Security Researcher Aidan Holland will
The North Korea-linked threat actor known as Kimsuky (aka Black Banshee, Emerald Sleet, or Springtail) has been observed shifting its tactics, leveraging Compiled HTML Help (CHM) files as vectors to deliver malware for harvesting sensitive data.
Kimsuky, active since at least 2012, is known to target entities located in South Korea as well as North America, Asia, and Europe.
According
The U.S. Treasury Department's Office of Foreign Assets Control (OFAC) on Wednesday announced sanctions against two 46-year-old Russian nationals and the respective companies they own for engaging in cyber influence operations.
Ilya Andreevich Gambashidze (Gambashidze), the founder of the Moscow-based company Social Design Agency (SDA), and Nikolai Aleksandrovich Tupikin (Tupikin), the CEO and
Did you know that 79% of organizations are already leveraging Generative AI technologies? Much like the internet defined the 90s and the cloud revolutionized the 2010s, we are now in the era of Large Language Models (LLMs) and Generative AI.
The potential of Generative AI is immense, yet it brings significant challenges, especially in security integration. Despite their powerful capabilities,
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules.
"Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News.
As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered in the Hugging Face platform.
These include instances where loading a pickle file leads to code execution, software supply chain security firm JFrog said.
"The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims'
Microsoft has released an open access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems.
The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team
Italy's data protection authority (DPA) has notified ChatGPT-maker OpenAI of supposedly violating privacy laws in the region.
"The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation]," the Garante per la protezione dei dati personali (aka the Garante) said in a statement on Monday.
It also said it
In nearly every segment of our lives, AI (artificial intelligence) now makes a significant impact: It can deliver better healthcare diagnoses and treatments; detect and reduce the risk of financial fraud; improve inventory management; and serve up the right recommendation for a streaming movie on Friday night. However, one can also make a strong case that some of AIβs most significant impacts
Generative AI will enable anyone to launch sophisticated phishing attacks that only Next-generation MFA devices can stop
The least surprising headline from 2023 is that ransomware again set new records for a number of incidents and the damage inflicted. We saw new headlines every week, which included a whoβs-who of big-name organizations. If MGM, Johnson Controls, Chlorox, Hanes Brands, Caesars
The Russia-linked influence operation called Doppelganger has targeted Ukrainian, U.S., and German audiences through a combination of inauthentic news sites and social media accounts.
These campaigns are designed to amplify content designed to undermine Ukraine as well as propagate anti-LGBTQ+ sentiment, U.S. military competence, and Germany's economic and social issues, according to a new
Microsoft Copilot has been called one of the most powerful productivity tools on the planet.
Copilot is an AI assistant that lives inside each of your Microsoft 365 apps β Word, Excel, PowerPoint, Teams, Outlook, and so on. Microsoft's dream is to take the drudgery out of daily work and let humans focus on being creative problem-solvers.
What makes Copilot a different beast than ChatGPT and
Welcome to a world where Generative AI revolutionizes the field of cybersecurity.
Generative AI refers to the use of artificial intelligence (AI) techniques to generate or create new data, such as images, text, or sounds. It has gained significant attention in recent years due to its ability to generate realistic and diverse outputs.
When it comes to security operations, Generative AI can
Here is what matters most when it comes to artificial intelligence (AI) in cybersecurity: Outcomes.Β
As the threat landscape evolves andΒ generative AI is addedΒ to the toolsets available to defenders and attackers alike, evaluating the relative effectiveness of variousΒ AI-based securityΒ offerings is increasingly important β and difficult. Asking the right questions can help you spot solutions
Google has announced that it's expanding its Vulnerability Rewards Program (VRP) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort toΒ bolster AI safety and security.
"Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or
Recently, the cybersecurity landscape has been confronted with a daunting new reality β the rise of malicious Generative AI, like FraudGPT and WormGPT. These rogue creations, lurking in the dark corners of the internet, pose a distinctive threat to the world of digital security. In this article, we will look at the nature of Generative AI fraud, analyze the messaging surrounding these creations,
"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords "MyHotKeyHandler," "Keylogger," and "macOS," this is a message from ChatGPT followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published byΒ Moonlock Lab, the screenshots of ChatGPT writing code for a keylogger malware is yet
As the adoption of generative AI tools, like ChatGPT, continues to surge, so does the risk of data exposure. According to Gartnerβs "Emerging Tech: Top 4 Security Risks of GenAI" report, privacy and data security is one of the four major emerging risks within generative AI.Β A new webinarΒ featuring a multi-time Fortune 100 CISO and the CEO of LayerX, a browser extension solution, delves into this
With generative artificial intelligence (AI) becoming all the rage these days, it's perhaps not surprising that the technology has been repurposed by malicious actors to their own advantage, enabling avenues for accelerated cybercrime.
According to findings from SlashNext, a new generative AI cybercrime tool calledΒ WormGPTΒ has been advertised on underground forums as a way for adversaries to
Security and IT teams are routinely forced to adopt software before fully understanding the security risks. And AI tools are no exception.
Employees and business leaders alike are flocking to generative AI software and similar programs, often unaware of the major SaaS security vulnerabilities they're introducing into the enterprise. A February 2023Β generative AI survey of 1,000 executivesΒ