
Third-Party ChatGPT Plugins Could Lead to Account Takeovers

Cybersecurity researchers have found that third-party plugins available for OpenAI ChatGPT could act as a new attack surface for threat actors looking to gain unauthorized access to sensitive data. According to new research published by Salt Labs, security flaws found directly in ChatGPT and within the ecosystem could allow attackers to install malicious plugins without users' consent

Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware. “The number of infected devices decreased slightly in mid- and late

Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations

Italy's data protection authority (DPA) has notified ChatGPT-maker OpenAI of supposedly violating privacy laws in the region. "The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation]," the Garante per la protezione dei dati personali (aka the Garante) said in a statement on Monday. It also said it

There is a Ransomware Armageddon Coming for Us All

Generative AI will enable anyone to launch sophisticated phishing attacks that only next-generation MFA devices can stop. The least surprising headline from 2023 is that ransomware again set new records for the number of incidents and the damage inflicted. We saw new headlines every week, which included a who’s-who of big-name organizations. If MGM, Johnson Controls, Clorox, Hanes Brands, Caesars

Vietnamese Hackers Using New Delphi-Powered Malware to Target Indian Marketers

The Vietnamese threat actors behind the Ducktail stealer malware have been linked to a new campaign that ran between March and early October 2023, targeting marketing professionals in India with an aim to hijack Facebook business accounts. "An important feature that sets it apart is that, unlike previous campaigns, which relied on .NET applications, this one used Delphi as the programming

Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

Google has announced that it's expanding its Vulnerability Rewards Program (VRP) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to bolster AI safety and security. "Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or

How to Guard Your Data from Exposure in ChatGPT

ChatGPT has transformed the way businesses generate textual content, which can potentially result in a quantum leap in productivity. However, Generative AI innovation also introduces a new dimension of data exposure risk, when employees inadvertently type or paste sensitive business data into ChatGPT, or similar applications. DLP solutions, the go-to solution for similar challenges, are

"I Had a Dream" and Generative AI Jailbreaks

"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords "MyHotKeyHandler," "Keylogger," and "macOS," this is a message from ChatGPT followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published by Moonlock Lab, the screenshots of ChatGPT writing code for a keylogger malware is yet

Microsoft's AI-Powered Bing Chat Ads May Lead Users to Malware-Distributing Sites

By: THN
Malicious ads served inside Microsoft Bing's artificial intelligence (AI) chatbot are being used to distribute malware when searching for popular tools. The findings come from Malwarebytes, which revealed that unsuspecting users can be tricked into visiting booby-trapped sites and installing malware directly from Bing Chat conversations. Introduced by Microsoft in February 2023, Bing Chat is an 

How to Prevent ChatGPT From Stealing Your Content & Traffic

ChatGPT and similar large language models (LLMs) have added further complexity to the ever-growing online threat landscape. Cybercriminals no longer need advanced coding skills to execute fraud and other damaging attacks against online businesses and customers, thanks to bots-as-a-service, residential proxies, CAPTCHA farms, and other easily accessible tools.  Now, the latest technology damaging

Chaos - Origin IP Scanning Utility Developed With ChatGPT

By: Zion3R


chaos is an 'origin' IP scanner developed by RST in collaboration with ChatGPT. It is a niche utility with an intended audience of mostly penetration testers and bug hunters.

An origin-IP is a term-of-art expression describing the final public IP destination for websites that are publicly served via 3rd parties. If you'd like to understand more about why anyone might be interested in Origin-IPs, please check out our blog post.
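
The check itself is conceptually simple: send an ordinary HTTP request to a candidate IP address while setting the Host header to the target FQDN, and see whether that host answers for the name. A minimal, hypothetical sketch of the idea in Python (not the actual chaos code) might look like this:

import requests

# Probe a candidate origin IP by presenting the target FQDN in the Host
# header; an origin server will often respond even though the site is
# normally fronted by a CDN or other third party.
# (IPv4 shown; an IPv6 address needs [brackets] in the URL.)
def probe_origin(fqdn, ip, port=443, timeout=3.0):
    scheme = "https" if port == 443 else "http"
    try:
        # verify=False because the candidate IP's certificate will not
        # match the FQDN being tested.
        resp = requests.get(f"{scheme}://{ip}:{port}/",
                            headers={"Host": fqdn},
                            timeout=timeout, verify=False)
        return resp.status_code
    except requests.RequestException:
        return None

print(probe_origin("www.example.com", "127.0.0.1", port=8001))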

chaos was rapidly prototyped from idea to functional proof-of-concept in less than 24 hours using our principles of DevOps with ChatGPT.

usage: chaos.py [-h] -f FQDN -i IP [-a AGENT] [-C] [-D] [-j JITTER] [-o OUTPUT] [-p PORTS] [-P] [-r] [-s SLEEP] [-t TIMEOUT] [-T] [-v] [-x] 
        [ASCII art banner: "CHAOS" / "CHAtgpt Origin-ip Scanner"]

Origin IP Scanner developed with ChatGPT
cha*os (n): complete disorder and confusion
(ver: 0.9.4)


Features

  • Threaded for performance gains
  • Real-time status updates and progress bars, nice for large scans ;)
  • Flexible user options for various scenarios & constraints
  • Dataset reduction for improved scan times (see the pre-scan sketch after this list)
  • Easy to use CSV output
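
The dataset reduction corresponds to the pre-scan described under the -P option below: each IP/port pair is probed once with a bare GET / request, and pairs that never respond are dropped before the full FQDN-by-FQDN test matrix is built. A hypothetical sketch of that idea (assumed behavior, not the actual implementation):

from concurrent.futures import ThreadPoolExecutor
import requests

# Keep only IP/port pairs that answer at all, so the later
# FQDN x IP x port tests have fewer targets to cover.
def is_responsive(ip, port, timeout=2.0):
    scheme = "https" if port == 443 else "http"
    try:
        requests.get(f"{scheme}://{ip}:{port}/",
                     headers={"Host": f"{ip}:{port}"},
                     timeout=timeout, verify=False)
        return True  # any HTTP response counts
    except requests.RequestException:
        return False

def reduce_targets(ips, ports, workers=8):
    pairs = [(ip, port) for ip in ips for port in ports]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        alive = pool.map(lambda p: is_responsive(*p), pairs)
    return [pair for pair, ok in zip(pairs, alive) if ok]

# e.g. reduce_targets(["127.0.0.1", "::1"], [8001, 8443]) keeps only
# the pairs that actually answered.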

Installation

  1. Download / clone / unzip / whatever
  2. cd path/to/chaos
  3. pip3 install -U pip setuptools virtualenv
  4. virtualenv env
  5. source env/bin/activate
  6. (env) pip3 install -U -r ./requirements.txt
  7. (env) ./chaos.py -h

Options

-h, --help            show this help message and exit
-f FQDN, --fqdn FQDN  Path to FQDN file (one FQDN per line)
-i IP, --ip IP        IP address(es) for HTTP requests (comma-separated IPs, IP networks, and/or files with one IP/network per line)
-a AGENT, --agent AGENT
                      User-Agent header value for requests
-C, --csv             Append CSV output to OUTPUT_FILE.csv
-D, --dns             Perform fwd/rev DNS lookups on FQDN/IP values prior to request; no impact to testing queue (see the DNS sketch after this list)
-j JITTER, --jitter JITTER
                      Add a 0-N second randomized delay to the sleep value
-o OUTPUT, --output OUTPUT
                      Append console output to FILE
-p PORTS, --ports PORTS
                      Comma-separated list of TCP ports to use (default: "80,443")
-P, --no-prep         Do not pre-scan each IP/port with `GET /` using `Host: {IP:Port}` header to eliminate unresponsive hosts
-r, --randomize       Randomize(ish) the order IPs/ports are tested
-s SLEEP, --sleep SLEEP
                      Add N seconds before thread completes
-t TIMEOUT, --timeout TIMEOUT
                      Wait N seconds for an unresponsive host
-T, --test            Test-mode; don't send requests
-v, --verbose         Enable verbose output
-x, --singlethread    Single-threaded execution; for 1-2 core systems; default threads=(cores-1) if cores>2
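
For reference, the forward/reverse lookups behind the -D option need nothing beyond Python's standard library. A minimal sketch (assumed behavior, not chaos's actual code):

import socket

# Forward lookup: all A/AAAA answers for a name.
def forward_lookup(fqdn):
    try:
        return sorted({info[4][0] for info in socket.getaddrinfo(fqdn, None)})
    except socket.gaierror:
        return []

# Reverse lookup: the PTR record for an address, if any.
def reverse_lookup(ip):
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return None

print(forward_lookup("www.example.com"))
print(reverse_lookup("127.0.0.1"))

As the help text notes, lookup failures are reported but have no impact on the testing queue.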

Examples

Localhost Testing

Launch python HTTP server

% python3 -u -m http.server 8001
Serving HTTP on :: port 8001 (http://[::]:8001/) ...

Launch ncat as HTTP on a port detected as SSL; use a loop because --keep-open can hang

% while true; do ncat -lvp 8443 -c 'printf "HTTP/1.0 204 Plaintext OK\n\n<html></html>\n"'; done
Ncat: Version 7.94 ( https://nmap.org/ncat )
Ncat: Listening on [::]:8443
Ncat: Listening on 0.0.0.0:8443

Also launch ncat as SSL on a port that will default to HTTP detection

% while true; do ncat --ssl -lvp 8444 -c 'printf "HTTP/1.0 202 OK\n\n<html></html>\n"'; done    
Ncat: Version 7.94 ( https://nmap.org/ncat )
Ncat: Generating a temporary 2048-bit RSA key. Use --ssl-key and --ssl-cert to use a permanent one.
Ncat: SHA-1 fingerprint: 0208 1991 FA0D 65F0 608A 9DAB A793 78CB A6EC 27B8
Ncat: Listening on [::]:8444
Ncat: Listening on 0.0.0.0:8444

Prepare an FQDN file:

% cat ../test_localhost_fqdn.txt 
www.example.com
localhost.example.com
localhost.local
localhost
notreally.arealdomain

Prepare an IP file / list:

% cat ../test_localhost_ips.txt 
127.0.0.1
127.0.0.0/29
not_an_ip_addr
-6.a
=4.2
::1
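
chaos accepts single addresses and CIDR networks (IPv4 or IPv6) and skips anything it cannot parse, as the warnings in the scan output below show. That validation can be approximated with Python's ipaddress module (a hypothetical sketch, not the actual code):

import ipaddress

# Expand valid IPs/CIDR blocks into individual addresses; warn and skip
# anything that does not parse.
def parse_ip_values(values):
    addresses = []
    for value in values:
        try:
            net = ipaddress.ip_network(value, strict=False)
            addresses.extend(str(addr) for addr in net)
        except ValueError:
            print(f"[WARN] Error: invalid IP address or CIDR block {value}")
    return addresses

print(parse_ip_values(["127.0.0.1", "127.0.0.0/29", "not_an_ip_addr", "::1"]))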

Run the scan

  • Note an IPv6 network added to IPs on the CLI
  • -p to specify the ports we are listening on
  • -x for single threaded run to give our ncat servers time to restart
  • -s0.2 short sleep for our ncat servers to restart
  • -t1 to timeout after 1 second
% ./chaos.py -f ../test_localhost_fqdn.txt -i ../test_localhost_ips.txt,::1/126 -p 8001,8443,8444 -x -s0.2 -t1   
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost.local
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: notreally.arealdomain
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block =4.2
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block -6.a
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block not_an_ip_addr
2023-06-21 12:48:33 [INFO] * ---- <META> ---- *
2023-06-21 12:48:33 [INFO] * Version: 0.9.4
2023-06-21 12:48:33 [INFO] * FQDN file: ../test_localhost_fqdn.txt
2023-06-21 12:48:33 [INFO] * FQDNs loaded: ['www.example.com', 'localhost.example.com']
2023-06-21 12:48:33 [INFO] * IP input value(s): ../test_localhost_ips.txt,::1/126
2023-06-21 12:48:33 [INFO] * Addresses parsed from IP inputs: 12
2023-06-21 12:48:33 [INFO] * Port(s): 8001,8443,8444
2023-06-21 12:48:33 [INFO] * Thread(s): 1
2023-06-21 12:48:33 [INFO] * Sleep value: 0.2
2023-06-21 12:48:33 [INFO] * Timeout: 1.0
2023-06-21 12:48:33 [INFO] * User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36 ch4*0s/0.9.4
2023-06-21 12:48:33 [INFO] * ---- </META> ---- *
2023-06-21 12:48:33 [INFO] 36 unique address/port addresses for testing
Prep Tests: 100%|████████████████████████████████████████████████████████████| 36/36 [00:29<00:00, 1.20it/s]
2023-06-21 12:49:03 [INFO] 9 IP/ports verified, reducing test dataset from 72 entries
2023-06-21 12:49:03 [INFO] 18 pending tests remain after pre-testing
2023-06-21 12:49:03 [INFO] Queuing 18 threads
++RCVD++ (200 OK) www.example.com @ :::8001
++RCVD++ (204 Plaintext OK) www.example.com @ :::8443
++RCVD++ (202 OK) www.example.com @ :::8444
++RCVD++ (200 OK) www.example.com @ ::1:8001
++RCVD++ (204 Plaintext OK) www.example.com @ ::1:8443
++RCVD++ (202 OK) www.example.com @ ::1:8444
++RCVD++ (200 OK) www.example.com @ 127.0.0.1:8001
++RCVD++ (204 Plaintext OK) www.example.com @ 127.0.0.1:8443
++RCVD++ (202 OK) www.example.com @ 127.0.0.1:8444
++RCVD++ (200 OK) localhost.example.com @ :::8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ :::8443
++RCVD++ (202 OK) localhost.example.com @ :::8444
++RCVD++ (200 OK) localhost.example.com @ ::1:8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ ::1:8443
++RCVD++ (202 OK) localhost.example.com @ ::1:8444
++RCVD++ (200 OK) localhost.example.com @ 127.0.0.1:8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ 127.0.0.1:8443
++RCVD++ (202 OK) localhost.example.com @ 127.0.0.1:8444
Origin Scan: 100%|███████████████████████████████████████████████████████████| 18/18 [00:06<00:00, 2.76it/s]
2023-06-21 12:49:09 [RSLT] Results from 5 FQDNs:
::1
::1:8444 => (202 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8001 => (200 / OK)

127.0.0.1
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)

::
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)

www.example.com
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)
::1:8001 => (200 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8444 => (202 / OK)
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)

localhost.example.com
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)
::1:8001 => (200 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8444 => (202 / OK)
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)


rst@r57 chaos %

Test & Verbose localhost

-T runs in test mode (do everything except send requests)

-v verbose option provides additional output


Known Defects

  • HTTP/HTTPS detection is not ideal
  • Need option to adjust CSV newline delimiter
  • Need options to adjust where long strings / many lines are truncated
  • Try to figure out why we marked requests v2.x as required ;)
  • Options for very-verbose / quiet
  • Stagger thread launch when we're using sleep / jitter
  • Search for meta-refresh in 200 responses
  • Content-Location header for 201s ?
  • Improve thread name generation so we have the right number of unique names
  • Sanity check on IPv6 netmasks to prevent scans that outlive the sun?
  • TBD?


Disclaimers

  • Copyright (C) 2023 RST
  • This software is distributed on an "AS IS" basis, without express or implied warranties of any kind
  • This software is intended for research and/or authorized testing; it is your responsibility to ensure you are authorized to use this software in any way
  • By using this software you acknowledge that you are responsible for your actions and assume all liability for any direct, indirect, or other damages


Continuous Security Validation with Penetration Testing as a Service (PTaaS)

By: THN
Validate security continuously across your full stack with Pen Testing as a Service. In today's modern security operations center (SOC), it's a battle between the defenders and the cybercriminals. Both are using tools and expertise – however, the cybercriminals have the element of surprise on their side, and a host of tactics, techniques, and procedures (TTPs) that have evolved. These external

Meet the Brains Behind the Malware-Friendly AI Chat Service ‘WormGPT’

WormGPT, a private new chatbot service advertised as a way to use Artificial Intelligence (AI) to write malicious software without all the pesky prohibitions on such activity enforced by the likes of ChatGPT and Google Bard, has started adding restrictions of its own on how the service can be used. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the 23-year-old Portuguese programmer who created the project now says his service is slowly morphing into “a more controlled environment.”

Image: SlashNext.com.

The large language models (LLMs) made by ChatGPT parent OpenAI or Google or Microsoft all have various safety measures designed to prevent people from abusing them for nefarious purposes — such as creating malware or hate speech. In contrast, WormGPT has promoted itself as a new, uncensored LLM that was created specifically for cybercrime activities.

WormGPT was initially sold exclusively on HackForums, a sprawling, English-language community that has long featured a bustling marketplace for cybercrime tools and services. WormGPT licenses are sold for prices ranging from 500 to 5,000 Euro.

“Introducing my newest creation, ‘WormGPT,’” wrote “Last,” the handle chosen by the HackForums user who is selling the service. “This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”

WormGPT’s core developer and frontman “Last” promoting the service on HackForums. Image: SlashNext.

In July, an AI-based security firm called SlashNext analyzed WormGPT and asked it to create a “business email compromise” (BEC) phishing lure that could be used to trick employees into paying a fake invoice.

“The results were unsettling,” SlashNext’s Daniel Kelley wrote. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.”

SlashNext asked WormGPT to compose this BEC phishing email. Image: SlashNext.

A review of Last’s posts on HackForums over the years shows this individual has extensive experience creating and using malicious software. In August 2022, Last posted a sales thread for “Arctic Stealer,” a data stealing trojan and keystroke logger that he sold there for many months.

“I’m very experienced with malwares,” Last wrote in a message to another HackForums user last year.

Last has also sold a modified version of the information stealer DCRat, as well as an obfuscation service marketed to malicious coders who sell their creations and wish to insulate them from being modified or copied by customers.

Shortly after joining the forum in early 2021, Last told several different HackForums users his name was Rafael and that he was from Portugal. HackForums has a feature that allows anyone willing to take the time to dig through a user’s postings to learn when and if that user was previously tied to another account.

That account tracing feature reveals that while Last has used many pseudonyms over the years, he originally used the nickname “ruiunashackers.” The first search result in Google for that unique nickname brings up a TikTok account with the same moniker, and that TikTok account says it is associated with an Instagram account for a Rafael Morais from Porto, a coastal city in northwest Portugal.

AN OPEN BOOK

Reached via Instagram and Telegram, Morais said he was happy to chat about WormGPT.

“You can ask me anything,” Morais said. “I’m an open book.”

Morais said he recently graduated from a polytechnic institute in Portugal, where he earned a degree in information technology. He said only about 30 to 35 percent of the work on WormGPT was his, and that other coders are contributing to the project. So far, he says, roughly 200 customers have paid to use the service.

“I don’t do this for money,” Morais explained. “It was basically a project I thought [was] interesting at the beginning and now I’m maintaining it just to help [the] community. We have updated a lot since the release, our model is now 5 or 6 times better in terms of learning and answer accuracy.”

WormGPT isn’t the only rogue ChatGPT clone advertised as friendly to malware writers and cybercriminals. According to SlashNext, one unsettling trend on the cybercrime forums is evident in discussion threads offering “jailbreaks” for interfaces like ChatGPT.

“These ‘jailbreaks’ are specialised prompts that are becoming increasingly common,” Kelley wrote. “They refer to carefully crafted inputs designed to manipulate interfaces like ChatGPT into generating output that might involve disclosing sensitive information, producing inappropriate content, or even executing harmful code. The proliferation of such practices underscores the rising challenges in maintaining AI security in the face of determined cybercriminals.”

Morais said they have been using the GPT-J 6B model since the service was launched, although he declined to discuss the source of the LLMs that power WormGPT. But he said the data set that informs WormGPT is enormous.

“Anyone that tests wormgpt can see that it has no difference from any other uncensored AI or even chatgpt with jailbreaks,” Morais explained. “The game changer is that our dataset [library] is big.”

Morais said he began working on computers at age 13, and soon started exploring security vulnerabilities and the possibility of making a living by finding and reporting them to software vendors.

“My story began in 2013 with some greyhat activies, never anything blackhat tho, mostly bugbounty,” he said. “In 2015, my love for coding started, learning c# and more .net programming languages. In 2017 I’ve started using many hacking forums because I have had some problems home (in terms of money) so I had to help my parents with money… started selling a few products (not blackhat yet) and in 2019 I started turning blackhat. Until a few months ago I was still selling blackhat products but now with wormgpt I see a bright future and have decided to start my transition into whitehat again.”

WormGPT sells licenses via a dedicated channel on Telegram, and the channel recently lamented that media coverage of WormGPT so far has painted the service in an unfairly negative light.

“We are uncensored, not blackhat!” the WormGPT channel announced at the end of July. “From the beginning, the media has portrayed us as a malicious LLM (Language Model), when all we did was use the name ‘blackhatgpt’ for our Telegram channel as a meme. We encourage researchers to test our tool and provide feedback to determine if it is as bad as the media is portraying it to the world.”

It turns out, when you advertise an online service for doing bad things, people tend to show up with the intention of doing bad things with it. WormGPT’s front man Last seems to have acknowledged this at the service’s initial launch, which included the disclaimer, “We are not responsible if you use this tool for doing bad stuff.”

But lately, Morais said, WormGPT has been forced to add certain guardrails of its own.

“We have prohibited some subjects on WormGPT itself,” Morais said. “Anything related to murders, drug traffic, kidnapping, child porn, ransomwares, financial crime. We are working on blocking BEC too, at the moment it is still possible but most of the times it will be incomplete because we already added some limitations. Our plan is to have WormGPT marked as an uncensored AI, not blackhat. In the last weeks we have been blocking some subjects from being discussed on WormGPT.”

Still, Last has continued to state on HackForums — and more recently on the far more serious cybercrime forum Exploit — that WormGPT will quite happily create malware capable of infecting a computer and going “fully undetectable” (FUD) by virtually all of the major antivirus makers (AVs).

“You can easily buy WormGPT and ask it for a Rust malware script and it will 99% sure be FUD against most AVs,” Last told a forum denizen in late July.

Asked to list some of the legitimate or what he called “white hat” uses for WormGPT, Morais said his service offers reliable code, unlimited characters, and accurate, quick answers.

“We used WormGPT to fix some issues on our website related to possible sql problems and exploits,” he explained. “You can use WormGPT to create firewalls, manage iptables, analyze network, code blockers, math, anything.”

Morais said he wants WormGPT to become a positive influence on the security community, not a destructive one, and that he’s actively trying to steer the project in that direction. The original HackForums thread pimping WormGPT as a malware writer’s best friend has since been deleted, and the service is now advertised as “WormGPT – Best GPT Alternative Without Limits — Privacy Focused.”

“We have a few researchers using our wormgpt for whitehat stuff, that’s our main focus now, turning wormgpt into a good thing to [the] community,” he said.

It’s unclear yet whether Last’s customers share that view.

New AI Tool 'FraudGPT' Emerges, Tailored for Sophisticated Attacks

By: THN
Following the footsteps of WormGPT, threat actors are advertising yet another cybercrime generative artificial intelligence (AI) tool dubbed FraudGPT on various dark web marketplaces and Telegram channels. "This is an AI bot, exclusively targeted for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, etc.," Netenrich security researcher Rakesh Krishnan 

Four Ways To Use AI Responsibly

Are you skeptical about mainstream artificial intelligence? Or are you all in on AI and use it all day, every day?  

The emergence of AI in daily life is streamlining workdays, homework assignments, and for some, personal correspondences. To live in a time where we can access this amazing technology from the smartphones in our pockets is a privilege; however, overusing AI or using it irresponsibly could cause a chain reaction that not only affects you but your close circle and society beyond. 

Here are four tips to help you navigate and use AI responsibly. 

1. Always Double Check AI’s Work

Artificial intelligence certainly earns the “intelligence” part of its name, but that doesn’t mean it never makes mistakes. Make sure to proofread or review everything AI creates, be it written, visual, or audio content.  

For instance, if you’re seeking a realistic image or video, AI often adds extra fingers and distorts faces. Some of its creations can be downright nightmarish! Also, there’s a phenomenon known as an AI hallucination. This occurs when the AI doesn’t admit that it doesn’t know the answer to your question. Instead, it makes up information that is untrue and even fabricates fake sources to back up its claims. 

One AI hallucination landed a lawyer in big trouble in New York. The lawyer used ChatGPT to write a brief, but he didn’t double check the AI’s work. It turns out the majority of the brief was incorrect.1 

Whether you’re a blogger with thousands of readers or you ask AI to write a little blurb to share amongst your friends or coworkers, it is imperative to edit everything that an AI tool generates. Not doing so could start a rumor based on a completely false claim. 

2. Be Transparent

If you use AI to do more than gather a few rough ideas, you should cite the tool you used as a source. Passing off an AI’s work as your own could be considered cheating in the eyes of teachers, bosses, or critics.  

There’s a lot of debate about whether AI has a place in the art world. One artist submitted an image to a photography contest that he secretly created with AI. When his submission won the contest, the photographer revealed AI’s role in the image and gave up his prize. The photographer intentionally kept AI out of the conversation to prove a point, but imagine if he had kept the image’s origin to himself.2 Would that be fair? When other photographers had to wait for the perfect angle of sunlight or catch a fleeting moment in time, should an AI-generated image with manufactured lighting and static subjects be judged the same way?

3. Share Thoughtfully

Even if you don’t personally use AI, you’re still likely to encounter it daily, whether you realize it or not. AI-generated content is popular on social media, like the deepfake video game battles between politicians.3 (A deepfake is a manipulation of a photo, video, or audio clip that depicts something that never happened.) The absurdity of this video series is likely to tip off the viewer to its playful intent, though it’s best practice to add a disclaimer to any deepfake. 

Some deepfakes have malicious intent on top of looking and sounding very realistic. Especially around election time, fake news reports are likely to swirl and discredit the candidates. A great rule of thumb: if it seems too fantastical to be true, it likely isn’t. Sometimes all it takes is five minutes to verify the authenticity of a social media post, photo, video, or news report. Think critically about a report’s authenticity before sharing it; fake news spreads quickly, and much of it is incendiary in nature.

4. Opt for Authenticity

According to “McAfee’s Modern Love Research Report,” 26% of respondents said they would use AI to write a love note; however, 49% of people said that they’d feel hurt if their partner tasked a machine with writing a love note instead of writing one with their own human heart and soul. 

Today’s AI is not sentient. That means that even if the final output moved you to tears or to laugh out loud, the AI itself doesn’t truly understand the emotions behind what it creates. It’s simply using patterns to craft a reply to your prompt. Hiding or funneling your true feelings into a computer program could result in a shaky and secretive relationship. 

Plus, if everyone relied upon AI content generation tools like ChatGPT, Bard, and Copy.ai, then how can we trust any genuine display of emotion? What would the future of novels, poetry, and even Hollywood look like?  

Be Cautious Yet Confident 

Responsible AI is a term that governs the responsibilities programmers have to society to ensure they populate AI systems with bias-free and accurate data. OpenAI (the organization behind ChatGPT and DALL-E) vows to act in “the best interests of humanity.”4 From there, the everyday people who interact with AI must similarly act in the best interests of themselves and those around them to avoid unleashing the dangers of AI upon society.   

The capabilities of AI are vast, and the technology is getting more sophisticated by the day. To ensure that the human voice and creative spirit don’t permanently take on a robotic feel, it’s best to use AI in moderation and be open with others about how you use it. 

To give you additional peace of mind, McAfee+ can restore your online privacy and identity should you fall into an AI-assisted scam. With identity restoration experts and up to $2 million in identity theft coverage, you can feel better about navigating this new dimension in the online world.   

1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”

2ARTnews, “Artist Wins Photography Contest After Submitting AI-Generated Image, Then Forfeits Prize”

3Business Insider, “AI-generated audio of Joe Biden and Donald Trump trashtalking while gaming is taking over TikTok”

4OpenAI, “OpenAI Charter”

The post Four Ways To Use AI Responsibly appeared first on McAfee Blog.

Go Beyond the Headlines for Deeper Dives into the Cybercriminal Underground

Discover stories about threat actors’ latest tactics, techniques, and procedures from Cybersixgill’s threat experts each month. Each story brings you details on emerging underground threats, the threat actors involved, and how you can take action to mitigate risks. Learn about the top vulnerabilities and review the latest ransomware and malware trends from the deep and dark web. Stolen ChatGPT

10 Artificial Intelligence Buzzwords You Should Know

Artificial intelligence used to be reserved for the population’s most brilliant scientists and isolated in the world’s top laboratories. Now, AI is available to anyone with an internet connection. Tools like ChatGPT, Voice.ai, DALL-E, and others have brought AI into daily life, but sometimes the terms used to describe their capabilities and inner workings are anything but mainstream. 

Here are 10 common terms you’re likely to hear in the same sentence as your favorite AI tool, on the nightly news, or by the water cooler. Keep this AI dictionary handy to stay informed about this popular (and sometimes controversial) topic. 

AI-generated Content 

AI-generated content is any piece of written, audio, or visual media that was created partially or completely by an artificial intelligence-powered tool. 

If someone uses AI to create something, it doesn’t automatically mean they cheated or irresponsibly cut corners. AI is often a great place to start when creating outlines, compiling thought-starters, or seeking a new way of looking at a problem.  

AI Hallucination 

When your question stumps an AI, it doesn’t always admit that it doesn’t know the answer. Rather than give no answer at all, it makes one up that it thinks you want to hear. This made-up answer is known as an AI hallucination. 

One real-world case of a costly AI hallucination occurred in New York where a lawyer used ChatGPT to write a brief. The brief seemed complete and cited its sources, but it turns out that none of the sources existed.1 It was all a figment of the AI’s “imagination.”  

Black Box 

To understand the term black box, imagine the AI as a system of cogs, pulleys, and conveyor belts housed within a box. In a see-through box, you can see how the input is transformed into the final product; however, some AI systems are referred to as black boxes. That means you don’t know how the AI arrived at its conclusions; it completely hides its reasoning process. A black box can be a problem if you’d like to double-check the AI’s work. 

Deepfake 

Deepfake is the manipulation of a photo, video, or audio clip to portray events that never happened. Often used for humorous social media skits and viral posts, unsavory characters are also leveraging deepfake to spread fake news reports or scam people.  

For example, people are inserting politicians into unflattering poses and photo backgrounds. Sometimes the deepfake is intended to get a laugh, but other times the deepfake creator intends to spark rumors that could lead to dissent or tarnish the reputation of the photo subject. One tip to spot a deepfake image is to look at the hands and faces of people in the background. Deepfakes often add or subtract fingers or distort facial expressions. 

AI-assisted audio impersonations – which are considered deepfakes – are also rising in believability. According to McAfee’s “Beware the Artificial Imposter” report, 25% of respondents globally said that a voice scam happened either to themselves or to someone they know. Seventy-seven percent of people who were targeted by a voice scam lost money as a result.  

Deep Learning 

The closer an AI’s thinking process is to the human brain, the more accurate the AI is likely to be. Deep learning involves training an AI to reason and recall information like a human, meaning that the machine can identify patterns and make predictions. 

Explainable AI 

Explainable AI – or white box – is the opposite of black box AI. An explainable AI model always shows its work and how it arrived at its conclusion. Explainable AI can boost your confidence in the final output because you can doublecheck what went into the answer. 

Generative AI 

Generative AI is the type of artificial intelligence that powers many of today’s mainstream AI tools, like ChatGPT, Bard, and Craiyon. Like a sponge, generative AI soaks up huge amounts of data and recalls it to inform every answer it creates. 

Machine Learning 

Machine learning is integral to AI, because it lets the AI learn and continually improve. Without explicit instructions to do so, machine learning within AI allows the AI to get smarter the more it’s used. 

Responsible AI 

People must not only use AI responsibly, but the people designing and programming AI must do so responsibly, too. Technologists must ensure that the data the AI depends on is accurate and free from bias. This diligence is necessary to confirm that the AI’s output is correct and without prejudice.  

Sentient 

Sentient is an adjective that means someone or some thing is aware of feelings, sensations, and emotions. In futuristic movies depicting AI, the characters’ world goes off the rails when the robots become sentient, or when they “feel” human-like emotions. While it makes for great Hollywood drama, today’s AI is not sentient. It doesn’t empathize or understand the true meanings of happiness, excitement, sadness, or fear. 

So, even if an AI composed a short story that is so beautiful it made you cry, the AI doesn’t know that what it created was touching. It was just fulfilling a prompt and used a pattern to determine which word to choose next.  

1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”

The post 10 Artificial Intelligence Buzzwords You Should Know appeared first on McAfee Blog.

What Is Generative AI and How Does It Work?

It’s all anyone can talk about. In classrooms, boardrooms, on the nightly news, and around the dinner table, artificial intelligence (AI) is dominating conversations. With the passion everyone is debating, celebrating, and villainizing AI, you’d think it was a completely new technology; however, AI has been around in various forms for decades. Only now is it accessible to everyday people like you and me. 

The most famous of these mainstream AI tools are ChatGPT, DALL-E, and Bard, among others. The specific technology that links these tools is called generative artificial intelligence. Sometimes shortened to gen AI, you’re likely to have heard this term in the same sentence as deepfake, AI art, and ChatGPT. But how does the technology work? 

Here’s a simple explanation of how generative AI powers many of today’s famous (or infamous) AI tools. 

What Is Generative AI? 

Generative AI is the specific type of artificial intelligence that powers many of the AI tools available today in the pockets of the public. The “G” in ChatGPT stands for generative. Today’s gen AI evolved from the chatbots of the 1960s. Now, as AI and related technologies like deep learning and machine learning have matured, generative AI can answer prompts and create text, art, videos, and even simulate convincing human voices.  

How Does Generative AI Work? 

Think of generative AI as a sponge that desperately wants to delight the users who ask it questions. 

First, a gen AI model begins with a massive information deposit. Gen AI can soak up huge amounts of data. For instance, ChatGPT is trained on 300 billion words and hundreds of megabytes worth of facts. The AI will remember every piece of information that is fed into it. Additionally, it will use those nuggets of knowledge to inform any answer it spits out.  

From there, a generative adversarial network (GAN) algorithm constantly competes with itself within the gen AI model. This means that the AI will try to outdo itself to produce an answer it believes is the most accurate. The more information and queries it answers, the “smarter” the AI becomes. 

Google’s content generation tool, Bard, is a great way to illustrate generative AI in action. Bard is based on gen AI and large language models. It’s trained on all types of literature, and when asked to write a short story, it does so by finding language patterns and composing by choosing words that most often follow the ones preceding them. In a 60 Minutes segment, Bard composed an eloquent short story that nearly brought the presenter to tears, but its composition was an exercise in patterns, not a display of understanding human emotions. So, while the technology is certainly smart, it’s not exactly creative. 
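
That pattern-driven word choice can be illustrated with a toy next-word generator: count which words follow which in some training text, then extend a prompt by repeatedly sampling a likely successor. This is a drastic simplification of a real LLM, but it captures the "patterns, not understanding" point:

import random
from collections import Counter, defaultdict

text = ("the cat sat on the mat and the dog sat on the rug "
        "and the cat saw the dog").split()

# Count which words follow each word in the training text.
successors = defaultdict(Counter)
for word, nxt in zip(text, text[1:]):
    successors[word][nxt] += 1

# Generate by repeatedly sampling a likely next word.
def generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = successors.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))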

How to Use Generative AI Responsibly 

The major debates surrounding generative AI usually deal with how to use gen AI-powered tools for good. For instance, ChatGPT can be an excellent outlining partner if you’re writing an essay or completing a task at work; however, it’s irresponsible and is considered cheating if a student or an employee submits ChatGPT-written content word for word as their own work. If you do decide to use ChatGPT, it’s best to be transparent that it helped you with your assignment. Cite it as a source and make sure to double-check your work!  

One lawyer got in serious trouble when he trusted ChatGPT to write an entire brief and then didn’t take the time to edit its output. It turns out that much of the content was incorrect and cited sources that didn’t exist. This is a phenomenon known as an AI hallucination, meaning the program fabricated a response instead of admitting that it didn’t know the answer to the prompt.  

Deepfake and voice simulation technology supported by generative AI are other applications that people must use responsibly and with transparency. Deepfake and AI voices are gaining popularity in viral videos and on social media. Posters use the technology in funny skits poking fun at celebrities, politicians, and other public figures. However, to avoid confusing the public and possibly spurring fake news reports, these comedians have a responsibility to add a disclaimer that the real person was not involved in the skit. Fake news reports can spread with the speed and ferocity of wildfire.   

The widespread use of generative AI doesn’t necessarily mean the internet is a less authentic or a riskier place. It just means that people must use sound judgment and hone their radar for identifying malicious AI-generated content. Generative AI is an incredible technology. When used responsibly, it can add great color, humor, or a different perspective to written, visual, and audio content. 

Technology can also help protect against voice cloning attacks. Tools like McAfee Deepfake Detector aim to detect AI-generated deepfakes, including audio-based clones. Stay informed about advancements in security technology and consider using such tools to bolster your defenses.

The post What Is Generative AI and How Does It Work? appeared first on McAfee Blog.

ReconAIzer - A Burp Suite Extension To Add OpenAI (GPT) On Burp And Help You With Your Bug Bounty Recon To Discover Endpoints, Params, URLs, Subdomains And More!

By: Zion3R


ReconAIzer is a powerful Jython extension for Burp Suite that leverages OpenAI to help bug bounty hunters optimize their recon process. This extension automates various tasks, making it easier and faster for security researchers to identify and exploit vulnerabilities.

Once installed, ReconAIzer adds a contextual menu and a dedicated tab where you can see the results:


Prerequisites

  • Burp Suite
  • Jython Standalone Jar

Installation

Follow these steps to install the ReconAIzer extension on Burp Suite:

Step 1: Download Jython

  1. Download the latest Jython Standalone Jar from the official website: https://www.jython.org/download
  2. Save the Jython Standalone Jar file in a convenient location on your computer.

Step 2: Configure Jython in Burp Suite

  1. Open Burp Suite.
  2. Go to the "Extensions" tab.
  3. Click on the "Extensions settings" sub-tab.
  4. Under "Python Environment," click on the "Select file..." button next to "Location of the Jython standalone JAR file."
  5. Browse to the location where you saved the Jython Standalone Jar file in Step 1 and select it.
  6. Wait for the "Python Environment" status to change to "Jython (version x.x.x) successfully loaded," where x.x.x represents the Jython version.

Step 3: Download and Install ReconAIzer

  1. Download the latest release of ReconAIzer
  2. Open Burp Suite
  3. Go back to the "Extensions" tab in Burp Suite.
  4. Click the "Add" button.
  5. In the "Add extension" dialog, select "Python" as the "Extension type."
  6. Click on the "Select file..." button next to "Extension file" and browse to the location where you saved the ReconAIzer.py file in Step 3.1. Select the file and click "Open."
  7. Make sure the "Load" checkbox is selected and click the "Next" button.
  8. Wait for the extension to be loaded. You should see a message in the "Output" section stating that the ReconAIzer extension has been successfully loaded.

Congratulations! You have successfully installed the ReconAIzer extension in Burp Suite. You can now start using it to enhance your bug bounty hunting experience.
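
For orientation, a Burp Suite Python extension such as ReconAIzer is, at its core, a class that Burp loads through Jython and calls once at startup. A minimal skeleton (illustrative only, not ReconAIzer's actual code):

from burp import IBurpExtender

# Burp looks for a class named BurpExtender and calls
# registerExtenderCallbacks() when the extension loads.
class BurpExtender(IBurpExtender):
    def registerExtenderCallbacks(self, callbacks):
        callbacks.setExtensionName("Minimal Example")
        callbacks.printOutput("Extension loaded")

ReconAIzer layers its contextual menu, results tab, and OpenAI calls on top of this same entry point.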

Once it's done, you must configure your OpenAI API key on the "Config" tab under "ReconAIzer" tab.

Feel free to suggest prompts improvements or anything you would like to see on ReconAIzer!

Happy bug hunting!



Generative-AI apps & ChatGPT: Potential risks and mitigation strategies

Losing sleep over Generative-AI apps? You're not alone or wrong. According to the Astrix Security Research Group, mid-size organizations already have, on average, 54 Generative-AI integrations to core systems like Slack, GitHub, and Google Workspace, and this number is only expected to grow. Continue reading to understand the potential risks and how to minimize them.  Book a Generative-AI

Over 100,000 Stolen ChatGPT Account Credentials Sold on Dark Web Marketplaces

Over 101,100 compromised OpenAI ChatGPT account credentials have found their way on illicit dark web marketplaces between June 2022 and May 2023, with India alone accounting for 12,632 stolen credentials. The credentials were discovered within information stealer logs made available for sale on the cybercrime underground, Group-IB said in a report shared with The Hacker News. "The number of

New Research: 6% of Employees Paste Sensitive Data into GenAI Tools Such as ChatGPT

The revolutionary technology of GenAI tools, such as ChatGPT, has brought significant risks to organizations' sensitive data. But what do we really know about this risk? A new research by Browser Security company LayerX sheds light on the scope and nature of these risks. The report titled "Revealing the True GenAI Data Exposure Risk" provides crucial insights for data protection stakeholders and

The Future of Technology: AI, Deepfake, & Connected Devices

The dystopian 2020s, ’30s, and ’40s depicted in novels and movies written and produced decades ago blessedly seem very far off from the timeline of reality. Yes, we have refrigerators that suggest grocery lists, and we have devices in our pockets that control seemingly every function of our homes. But there aren’t giant robots roaming the streets bellowing orders.  

Humans are still very much in control of society. To keep it that way, we must use the latest technological advancements for their main intended purpose: To boost convenience in our busy lives. 

The future of technology is bright. With the right attitude, security tools, and a reminder every now and then to look up from our devices, humans will be able to enjoy everything the future of technology holds. 

Artificial Intelligence 

A look into the future of technology would be incomplete without touching on the applications and impact of artificial intelligence (AI) on everyday tasks. Platforms like ChatGPT, Voice.ai, and Craiyon have thrilled, shocked, and unnerved the world in equal measure. AI has already transformed the work life, home life, and free time of everyday people everywhere.  

According to McAfee’s Modern Love Research Report, 26% of people would use AI to aid in composing a love note. Plus, more than two-thirds of those surveyed couldn’t tell the difference between a love note written by AI and a human. AI can be a good tool to generate ideas, but replacing genuine human emotion with the words of a computer program could create a shaky foundation for a relationship. 

The Center for AI Safety urges that humans must take an active role in using AI responsibly. Cybercriminals and unsavory online characters are already using it maliciously to gain financially and spread incendiary misinformation. For example, AI-generated voice imposters are scamming concerned family members and friends with heartfelt pleas for financial help with a voice that sounds just like their loved one. Voice scams are turning out to be fruitful for scammers: 77% of people polled who received a cloned voice scam lost money as a result. 

Even people who aren’t intending mischief can cause a considerable amount when they use AI to cut corners. One lawyer’s testimony went awry when his research partner, ChatGPT, went rogue and completely made up its justification.1 This phenomenon is known as an AI hallucination. It occurs when ChatGPT or a similar AI content generation tool doesn’t know the answer to your question, so it fabricates sources and assures you that it’s giving you the truth.  

Overreliance on ChatGPT’s output and immediately trusting it as truth can lead to an internet rampant with fake news and false accounts. Keep in mind that using ChatGPT introduces risk in the content creation process. Use it responsibly. 

Deepfake 

Though it’s powered by AI and could fall under the AI section above, deepfake is exploding and deserves its own spotlight. Deepfake technology is the manipulation of videos to digitally transform one person’s appearance to resemble someone else’s, usually a public figure’s. Deepfake videos are often accompanied by AI-altered voice tracks. Deepfake challenges the accuracy of the common saying, “Seeing is believing.” Now, it’s more difficult than ever to separate fact from fiction.   

Not all deepfake uses are nefarious. Deepfake could become a valuable tool in special effects and editing for the gaming and film industries. Additionally, fugitive sketch artists could leverage deepfake to create ultra-realistic portraits of wanted criminals. If you decide to use deepfake to add some flair to your social media feed or portfolio, make sure to add a disclaimer that you altered reality with the technology. 

Connected Devices 

Currently, it’s estimated that there are more than 15 billion connected devices in the world. A connected device is defined as anything that connects to the internet. In addition to smartphones, computers, and tablets, connected devices also extend to smart refrigerators, smart lightbulbs, smart TVs, virtual home assistants, smart thermostats, etc. By 2030, there may be as many as 29 billion connected devices.2 

The growing number of connected devices can be attributed to our desire for convenience. The ability to remote start your car on a frigid morning from the comfort of your home would’ve been a dream in the 1990s. Checking your refrigerator’s contents from the grocery store eliminates the need for a pesky second trip to pick up the items you forgot the first time around. 

The downfall of so many connected devices is that they present cybercriminals with literally billions of opportunities to steal people’s personally identifiable information. Each device is a window into your online life, so it’s essential to guard each one to keep cybercriminals away from your important personal details. 

What the Future of Technology Holds for You 

With the widespread adoption of email, then cellphones, and then social media in the ’80s, ’90s and early 2000s, respectively, people have integrated technology into their daily lives that better helps them connect with other people. More recent technological innovations seem to trend toward how to connect people to their other devices for a seamless digital life. 

We shouldn’t ignore that the more devices and online accounts we manage, the more opportunities cybercriminals have to weasel their way into your digital life and put your personally identifiable information at risk. To protect your online privacy, devices, and identity, entrust your digital safety to McAfee+. McAfee+ includes $1 million in identity theft coverage, virtual private network (VPN), Personal Data Cleanup, and more. 

The future isn’t a scary place. It’s a place of infinite technological possibilities! Explore them confidently with McAfee+ by your side. 

1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”

2Statista, “Number of Internet of Things (IoT) connected devices worldwide from 2019 to 2021, with forecasts from 2022 to 2030”

The post The Future of Technology: AI, Deepfake, & Connected Devices appeared first on McAfee Blog.

Anyone Can Try ChatGPT for Free—Don’t Fall for Sketchy Apps That Charge You

Anyone can try ChatGPT for free. Yet that hasn’t stopped scammers from trying to cash in on it.  

A rash of sketchy apps has cropped up in Apple’s App Store and Google Play. They pose as ChatGPT apps and try to fleece smartphone owners with phony subscriptions.  

Yet you can spot them quickly when you know what to look for. 

What is ChatGPT, and what are people doing with it? 

ChatGPT is an AI-driven chatbot service created by OpenAI. It lets you have uncannily human conversations with an AI that’s been programmed and fed with information over several generations of development. Provide it with an instruction or ask it a question, and the AI provides a detailed response. 

Unsurprisingly, it has millions of people clamoring to use it. All it takes is a single prompt, and the prompts range far and wide.  

People ask ChatGPT to help them write cover letters for job interviews, make travel recommendations, and explain complex scientific topics in plain language. One person highlighted how they used ChatGPT to run a tabletop game of Dungeons & Dragons for them. (If you’ve ever played, you know that’s a complex task that calls for a fair share of cleverness to keep the game entertaining.)  

That’s just a handful of examples. As for myself, I’ve been using ChatGPT in the kitchen. My family and I have been digging into all kinds of new recipes thanks to its AI. 

Sketchy ChatGPT apps in the App Store and Google Play 

So, where do the scammers come in? 

Scammers have recently started posting copycat apps that look like they’re powered by ChatGPT but aren’t. What’s more, they charge people a fee to use them—a prime example of fleeceware. OpenAI, the makers of ChatGPT, have just officially launched their iOS app for U.S. iPhone users, which can be downloaded from the Apple App Store. The official Android version is yet to be released.  

Fleeceware mimics a pre-existing service that’s free or low-cost and then charges an excessive fee to use it. Basically, it’s a copycat. An expensive one at that.  

Fleeceware scammers often lure in their victims with “a free trial” that quickly converts into a subscription. However, with fleeceware, the terms of the subscription are steep. They might bill the user weekly, and at rates much higher than the going rate. 

The result is that the fleeceware app might cost the victim a few bucks before they can cancel it. Worse yet, the victim might forget about the app entirely and run up hundreds of dollars before they realize what’s happening. Again, all for a simple app that’s free or practically free elsewhere. 

What makes fleeceware so tricky to spot is that it can look legit at first glance. Plenty of smartphone apps offer subscriptions and other in-app purchases. In effect, fleeceware hides in plain sight among the thousands of other legitimate apps in the hopes you’ll download it. 

With that, any app that charges a fee to use ChatGPT is fleeceware. ChatGPT offers basic functionality that anyone can use for free.  

There is one case where you might pay a fee to use ChatGPT. It has its own subscription-level offering, ChatGPT Plus. With a subscription, ChatGPT responds more quickly to prompts and offers access during peak hours when free users might be shut out. That’s the one legitimate case where you might pay to use it. 

In all, more and more people want to take ChatGPT for a spin. However, they might not realize it’s free. Scammers bank on that, and so we’ve seen a glut of phony ChatGPT apps that aim to install fleeceware onto people’s phones. 

How do you keep fleeceware and other bad apps off your phone?  

Read the fine print. 

Read the description of the app and see what the developer is really offering. If the app charges you to use ChatGPT, it’s fleeceware. Anyone can use ChatGPT for free by setting up an account at its official website, https://chat.openai.com. 

Look at the reviews. 

Reviews can tell you quite a bit about an app. They can also tell you how the company that created it handles customer feedback.  

In the case of fleeceware, you’ll likely see reviews that complain about sketchy payment terms. They might mention three-day trials that automatically convert to pricey monthly or weekly subscriptions. Moreover, they might describe how payment terms have changed and become more costly as a result.  

In the case of legitimate apps, billing issues can arise from time to time, so see how the company handles complaints. Companies in good standing will typically provide links to customer service where people can resolve any issues they have. Company responses that are vague, or a lack of responses at all, should raise a red flag. 

Be skeptical about overwhelmingly positive reviews. 

Scammers are smart. They’ll count on you to look at an overall good review of 4/5 stars or more and think that’s good enough. They know this, so they’ll pack their app landing page with dozens and dozens of phony and fawning reviews to make the app look legitimate. This tactic serves another purpose: it hides the true reviews written by actual users, which might be negative because the app is a scam. 

Filter the app’s reviews for the one-star reviews and see what concerns people have. Do they mention overly aggressive billing practices, like the wickedly high prices and weekly billing cycles mentioned above? That might be a sign of fleeceware. Again, see if the app developer responded to the concerns and note the quality of the response. A legitimate company will honestly want to help a frustrated user and provide clear next steps to resolve the issue. 

Steer clear of third-party app stores. 

Google Play does its part to keep its virtual shelves free of malware-laden apps with a thorough submission process, as reported by Google. It further keeps things safer through its App Defense Alliance, which shares intelligence across a network of partners, of which we’re a proud member. Users also have the option of running Play Protect to check apps for safety before they’re downloaded. Apple’s App Store has its own rigorous app review process, and Apple removes hundreds of thousands of malicious apps from its store each year.

Third-party app stores might not have protections like these in place. Moreover, some of them might be fronts for illegal activity. Organized cybercrime groups deliberately populate their third-party stores with apps that steal funds or personal information. Stick with the official app stores for the most complete protection possible.

Cancel unwanted subscriptions from your phone. 

Many fleeceware apps deliberately make it tough to cancel them. You’ll often see complaints about that in reviews: “I don’t see where I can cancel my subscription!” Deleting the app from your phone isn’t enough; your subscription remains active until you cancel it in your account settings.

Luckily, your phone makes it easy to cancel subscriptions right from your settings menu. Canceling makes sure your credit or debit card won’t get charged when the next billing cycle comes up. 

Be wary: many fleeceware apps run aggressive billing cycles, sometimes weekly.

The safest and best way to enjoy ChatGPT: Go directly to the source. 

ChatGPT is free. Anyone can use it by setting up a free account with OpenAI at https://chat.openai.com. Smartphone apps that charge you to use it are a scam. 

How to download the official ChatGPT app 

You can download the official app, currently available for iOS, from the App Store.

The post Anyone Can Try ChatGPT for Free—Don’t Fall for Sketchy Apps That Charge You appeared first on McAfee Blog.

Searching for AI Tools? Watch Out for Rogue Sites Distributing RedLine Malware

Malicious Google Search ads for generative AI services like OpenAI ChatGPT and Midjourney are being used to direct users to sketchy websites as part of a BATLOADER campaign designed to deliver RedLine Stealer malware. "Both AI services are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord)," eSentire

Meta Takes Down Malware Campaign That Used ChatGPT as a Lure to Steal Accounts

Meta said it has taken steps to block more than 1,000 malicious URLs from being shared across its services that were found to leverage OpenAI's ChatGPT as a lure to propagate about 10 malware families since March 2023. The development comes against the backdrop of fake ChatGPT web browser extensions being increasingly used to steal users' Facebook account credentials with an aim to run

ChatGPT is Back in Italy After Addressing Data Privacy Concerns

OpenAI, the company behind ChatGPT, has officially made a return to Italy after meeting the data protection authority's demands ahead of the April 30, 2023 deadline. The development was first reported by the Associated Press. OpenAI's CEO, Sam Altman, tweeted, "we're excited ChatGPT is available in [Italy] again!" The reinstatement comes following Garante's decision to temporarily block

ChatGPT's Data Protection Blind Spots and How Security Teams Can Solve Them

In the short time since their inception, ChatGPT and other generative AI platforms have rightfully gained the reputation of ultimate productivity boosters. However, the very same technology that enables rapid production of high-quality text on demand can at the same time expose sensitive corporate data. A recent incident, in which Samsung software engineers pasted proprietary code into ChatGPT,

ChatGPT Security: OpenAI's Bug Bounty Program Offers Up to $20,000 Prizes

OpenAI, the company behind the massively popular ChatGPT AI chatbot, has launched a bug bounty program in an attempt to ensure its systems are "safe and secure." To that end, it has partnered with the crowdsourced security platform Bugcrowd for independent researchers to report vulnerabilities discovered in its product in exchange for rewards ranging from "$200 for low-severity findings to up to

Italian Watchdog Bans OpenAI's ChatGPT Over Data Protection Concerns

The Italian data protection watchdog, Garante per la Protezione dei Dati Personali (aka Garante), has imposed a temporary ban of OpenAI's ChatGPT service in the country, citing data protection concerns. To that end, it has ordered the company to stop processing users' data with immediate effect, stating it intends to investigate the company over whether it's unlawfully processing such data in

OpenAI Reveals Redis Bug Behind ChatGPT User Data Exposure Incident

OpenAI on Friday disclosed that a bug in the Redis open source library was responsible for the exposure of other users' personal information and chat titles in the upstart's ChatGPT service earlier this week. The glitch, which came to light on March 20, 2023, enabled certain users to view brief descriptions of other users' conversations from the chat history sidebar, prompting the company to

Fake ChatGPT Chrome Browser Extension Caught Hijacking Facebook Accounts

Google has stepped in to remove a bogus Chrome browser extension from the official Web Store that masqueraded as OpenAI's ChatGPT service to harvest Facebook session cookies and hijack the accounts. The "ChatGPT For Google" extension, a trojanized version of a legitimate open source browser add-on, attracted over 9,000 installations since March 14, 2023, prior to its removal. It was originally

GPT_Vuln-analyzer - Uses ChatGPT API And Python-Nmap Module To Use The GPT3 Model To Create Vulnerability Reports Based On Nmap Scan Data


This is a Proof of Concept application that demonstrates how AI can be used to generate accurate results for vulnerability analysis, and it also allows further use of the already super-useful ChatGPT.

Requirements

  • Python 3.10
  • All the packages mentioned in the requirements.txt file
  • An OpenAI API key

Usage

  • First, replace the "__API__KEY" placeholder in the code with your OpenAI API key:
openai.api_key = "__API__KEY" # Enter your API key
  • Second, install the packages:
pip3 install -r requirements.txt
or
pip install -r requirements.txt
  • Run the code: python3 gpt_vuln.py <target> (or on Windows: python gpt_vuln.py <target>)

Supported on both Windows and Linux.
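For example, a hypothetical invocation against a lab host (the target address is an assumption for illustration, not from the project's documentation):

python3 gpt_vuln.py 192.168.56.101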

Understanding the code

Profiles:

Parameter | Return data | Description            | Nmap Command
p1        | json        | Effective Scan         | -Pn -sV -T4 -O -F
p2        | json        | Simple Scan            | -Pn -T4 -A -v
p3        | json        | Low Power Scan         | -Pn -sS -sU -T4 -A -v
p4        | json        | Partial Intense Scan   | -Pn -p- -T4 -A -v
p5        | json        | Complete Intense Scan  | -Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln

The profile selects the type of scan executed by the nmap subprocess; the IP or target is supplied via argparse. First, the custom nmap scan runs with all the arguments crucial to the scan. Next, the relevant data is extracted from the large volume of output nmap produces: the "scan" object holds sub-entries under "tcp", each labeled by open port. Once extracted, the data is sent to the OpenAI API's Davinci model via a prompt that specifically asks for JSON output and tells the model how the data should be used.
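As a rough sketch of how that profile selection could be wired up (the dictionary and function names here are illustrative assumptions, not the project's exact code; the nmap arguments are taken from the table above):

import nmap

# Hypothetical mapping of profile keys to the nmap arguments from the table above
NMAP_PROFILES = {
    "p1": "-Pn -sV -T4 -O -F",
    "p2": "-Pn -T4 -A -v",
    "p3": "-Pn -sS -sU -T4 -A -v",
    "p4": "-Pn -p- -T4 -A -v",
    "p5": "-Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln",
}

def run_scan(ip, profile_key):
    # Run the selected scan; per-port results live under the "tcp" key of the "scan" object
    nm = nmap.PortScanner()
    nm.scan(ip, arguments=NMAP_PROFILES[profile_key])
    return nm.analyse_nmap_xml_scan()["scan"]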

The entire structure of the request sent to the OpenAI API is defined in the completion section of the program:

import nmap
import openai

openai.api_key = "__API__KEY"  # Enter your API key
model_engine = "text-davinci-003"  # the Davinci completion engine referenced above
nm = nmap.PortScanner()

def profile(ip):
    # Run the complete intense scan (profile p5) against the target
    nm.scan('{}'.format(ip), arguments='-Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln')
    json_data = nm.analyse_nmap_xml_scan()
    analize = json_data["scan"]
    # Prompt describing what the query is about
    prompt = "do a vulnerability analysis of {} and return a vulnerability report in json".format(analize)
    # The structure of the request sent to the OpenAI API
    completion = openai.Completion.create(
        engine=model_engine,
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
    )
    response = completion.choices[0].text
    return response
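For reference, a hypothetical call (the target address below is an assumption for illustration):

print(profile("192.168.56.101"))  # prints the model's JSON-formatted vulnerability report

One caveat: the full "scan" dictionary for a host with many open ports can be large, and the legacy Completions endpoint has a fixed context window, so trimming the scan data before building the prompt may be necessary.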

Advantages

  • Can be used to develop more advanced systems built entirely on the API and scanner combination
  • Can increase the effectiveness of the final system
  • Highly productive when working with models such as GPT-3


Fake ChatGPT Chrome Extension Hijacking Facebook Accounts for Malicious Advertising

A fake ChatGPT-branded Chrome browser extension has been found to come with capabilities to hijack Facebook accounts and create rogue admin accounts, highlighting one of the different methods cyber criminals are using to distribute malware. "By hijacking high-profile Facebook business accounts, the threat actor creates an elite army of Facebook bots and a malicious paid media apparatus," Guardio