FreshRSS


Third-Party ChatGPT Plugins Could Lead to Account Takeovers

Cybersecurity researchers have found that third-party plugins available for OpenAI ChatGPT could act as a new attack surface for threat actors looking to gain unauthorized access to sensitive data. According to new research published by Salt Labs, security flaws found directly in ChatGPT and within the ecosystem could allow attackers to install malicious plugins without users' consent

Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware. “The number of infected devices decreased slightly in mid- and late

Italian Data Protection Watchdog Accuses ChatGPT of Privacy Violations

Italy's data protection authority (DPA) has notified ChatGPT-maker OpenAI of supposedly violating privacy laws in the region. "The available evidence pointed to the existence of breaches of the provisions contained in the E.U. GDPR [General Data Protection Regulation]," the Garante per la protezione dei dati personali (aka the Garante) said in a statement on Monday. It also said it

There is a Ransomware Armageddon Coming for Us All

Generative AI will enable anyone to launch sophisticated phishing attacks that only next-generation MFA devices can stop. The least surprising headline from 2023 is that ransomware again set new records for the number of incidents and the damage inflicted. We saw new headlines every week, which included a who’s-who of big-name organizations. If MGM, Johnson Controls, Clorox, Hanes Brands, Caesars

Vietnamese Hackers Using New Delphi-Powered Malware to Target Indian Marketers

The Vietnamese threat actors behind the Ducktail stealer malware have been linked to a new campaign that ran between March and early October 2023, targeting marketing professionals in India with an aim to hijack Facebook business accounts. "An important feature that sets it apart is that, unlike previous campaigns, which relied on .NET applications, this one used Delphi as the programming

Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

Google has announced that it's expanding its Vulnerability Rewards Program (VRP) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to bolster AI safety and security. "Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or

How to Guard Your Data from Exposure in ChatGPT

ChatGPT has transformed the way businesses generate textual content, which can potentially result in a quantum leap in productivity. However, Generative AI innovation also introduces a new dimension of data exposure risk, when employees inadvertently type or paste sensitive business data into ChatGPT or similar applications. DLP tools, the go-to solution for similar challenges, are

"I Had a Dream" and Generative AI Jailbreaks

"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords "MyHotKeyHandler," "Keylogger," and "macOS," this is a message from ChatGPT followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published by Moonlock Lab, the screenshots of ChatGPT writing code for a keylogger malware is yet

Microsoft's AI-Powered Bing Chat Ads May Lead Users to Malware-Distributing Sites

By: THN
Malicious ads served inside Microsoft Bing's artificial intelligence (AI) chatbot are being used to distribute malware when searching for popular tools. The findings come from Malwarebytes, which revealed that unsuspecting users can be tricked into visiting booby-trapped sites and installing malware directly from Bing Chat conversations. Introduced by Microsoft in February 2023, Bing Chat is an 

How to Prevent ChatGPT From Stealing Your Content & Traffic

ChatGPT and similar large language models (LLMs) have added further complexity to the ever-growing online threat landscape. Cybercriminals no longer need advanced coding skills to execute fraud and other damaging attacks against online businesses and customers, thanks to bots-as-a-service, residential proxies, CAPTCHA farms, and other easily accessible tools.  Now, the latest technology damaging

Chaos - Origin IP Scanning Utility Developed With ChatGPT

By: Zion3R


chaos is an 'origin' IP scanner developed by RST in collaboration with ChatGPT. It is a niche utility with an intended audience of mostly penetration testers and bug hunters.

An origin-IP is a term of art describing the final public IP destination for websites that are publicly served via third parties. If you'd like to understand more about why anyone might be interested in origin-IPs, please check out our blog post.

chaos was rapidly prototyped from idea to functional proof-of-concept in less than 24 hours using our principles of DevOps with ChatGPT.
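The probe itself is conceptually simple: request each candidate IP directly while presenting the target FQDN in the Host header, and note which IPs answer like the real site. Below is a minimal stdlib sketch of that idea; the function names (`build_probe_url`, `probe_origin`) are hypothetical illustrations, not chaos's actual code.

```python
import urllib.request
import urllib.error

def build_probe_url(ip, port, scheme="http"):
    # Bracket IPv6 literals so the URL authority parses correctly
    host = f"[{ip}]" if ":" in ip else ip
    return f"{scheme}://{host}:{port}/"

def probe_origin(fqdn, ip, port, timeout=1.0):
    """GET the bare IP while presenting the FQDN in the Host header."""
    req = urllib.request.Request(build_probe_url(ip, port),
                                 headers={"Host": fqdn})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status, resp.reason
    except urllib.error.HTTPError as err:
        return err.code, err.reason   # host answered, just with a non-2xx status
    except OSError:
        return None                   # unreachable or timed out
```

An IP that returns the site's expected content or status for that Host value is a candidate origin; unresponsive hosts return `None` and can be dropped from further testing.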

usage: chaos.py [-h] -f FQDN -i IP [-a AGENT] [-C] [-D] [-j JITTER] [-o OUTPUT] [-p PORTS] [-P] [-r] [-s SLEEP] [-t TIMEOUT] [-T] [-v] [-x] 
[ASCII banner: "CHAOS" / "CHAtgpt Origin-ip Scanner"]

Origin IP Scanner developed with ChatGPT
cha*os (n): complete disorder and confusion
(ver: 0.9.4)


Features

  • Threaded for performance gains
  • Real-time status updates and progress bars, nice for large scans ;)
  • Flexible user options for various scenarios & constraints
  • Dataset reduction for improved scan times
  • Easy to use CSV output

Installation

  1. Download / clone / unzip / whatever
  2. cd path/to/chaos
  3. pip3 install -U pip setuptools virtualenv
  4. virtualenv env
  5. source env/bin/activate
  6. (env) pip3 install -U -r ./requirements.txt
  7. (env) ./chaos.py -h

Options

-h, --help            show this help message and exit
-f FQDN, --fqdn FQDN  Path to FQDN file (one FQDN per line)
-i IP, --ip IP        IP address(es) for HTTP requests (comma-separated IPs, IP networks, and/or files with one IP/network per line)
-a AGENT, --agent AGENT
                      User-Agent header value for requests
-C, --csv             Append CSV output to OUTPUT_FILE.csv
-D, --dns             Perform fwd/rev DNS lookups on FQDN/IP values prior to request; no impact to testing queue
-j JITTER, --jitter JITTER
                      Add a 0-N second randomized delay to the sleep value
-o OUTPUT, --output OUTPUT
                      Append console output to FILE
-p PORTS, --ports PORTS
                      Comma-separated list of TCP ports to use (default: "80,443")
-P, --no-prep         Do not pre-scan each IP/port with `GET /` using `Host: {IP:Port}` header to eliminate unresponsive hosts
-r, --randomize       Randomize(ish) the order IPs/ports are tested
-s SLEEP, --sleep SLEEP
                      Add N seconds before thread completes
-t TIMEOUT, --timeout TIMEOUT
                      Wait N seconds for an unresponsive host
-T, --test            Test-mode; don't send requests
-v, --verbose         Enable verbose output
-x, --singlethread    Single-threaded execution; for 1-2 core systems; default threads=(cores-1) if cores>2
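Two of the defaults described above are simple to model: the thread count (`cores - 1` when the machine has more than 2 cores, otherwise single-threaded) and the per-thread delay (`-s` sleep plus a `-j` randomized jitter). A minimal sketch, using hypothetical helper names (`default_thread_count`, `delay_for`) that are illustrations rather than chaos's actual functions:

```python
import os
import random

def default_thread_count(single_thread=False):
    """-x / default: (cores - 1) threads if cores > 2, else 1."""
    cores = os.cpu_count() or 1
    return 1 if single_thread or cores <= 2 else cores - 1

def delay_for(sleep, jitter):
    """-s / -j: base sleep plus a randomized 0..jitter addition."""
    return sleep + random.uniform(0, jitter)
```

For example, `-s0.2 -j1` would yield a delay between 0.2 and 1.2 seconds per thread, which spreads requests out and reduces load spikes on the targets.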

Examples

Localhost Testing

Launch python HTTP server

% python3 -u -m http.server 8001
Serving HTTP on :: port 8001 (http://[::]:8001/) ...

Launch ncat as HTTP on a port detected as SSL; use a loop because --keep-open can hang

% while true; do ncat -lvp 8443 -c 'printf "HTTP/1.0 204 Plaintext OK\n\n<html></html>\n"'; done
Ncat: Version 7.94 ( https://nmap.org/ncat )
Ncat: Listening on [::]:8443
Ncat: Listening on 0.0.0.0:8443

Also launch ncat as SSL on a port that will default to HTTP detection

% while true; do ncat --ssl -lvp 8444 -c 'printf "HTTP/1.0 202 OK\n\n<html></html>\n"'; done    
Ncat: Version 7.94 ( https://nmap.org/ncat )
Ncat: Generating a temporary 2048-bit RSA key. Use --ssl-key and --ssl-cert to use a permanent one.
Ncat: SHA-1 fingerprint: 0208 1991 FA0D 65F0 608A 9DAB A793 78CB A6EC 27B8
Ncat: Listening on [::]:8444
Ncat: Listening on 0.0.0.0:8444

Prepare an FQDN file:

% cat ../test_localhost_fqdn.txt 
www.example.com
localhost.example.com
localhost.local
localhost
notreally.arealdomain

Prepare an IP file / list:

% cat ../test_localhost_ips.txt 
127.0.0.1
127.0.0.0/29
not_an_ip_addr
-6.a
=4.2
::1
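The `[WARN]` lines in the scan output below come from input validation of exactly this kind of mixed list. A hedged sketch of how such validation might look with Python's stdlib `ipaddress` module; `parse_ip_values` is a hypothetical name, not chaos's actual function:

```python
import ipaddress

def parse_ip_values(values):
    """Expand single IPs and CIDR blocks; collect invalid entries."""
    addrs, rejected = [], []
    for raw in values:
        raw = raw.strip()
        if not raw:
            continue
        try:
            # Single IPv4/IPv6 address
            addrs.append(str(ipaddress.ip_address(raw)))
            continue
        except ValueError:
            pass
        try:
            # CIDR block; strict=False tolerates host bits being set
            net = ipaddress.ip_network(raw, strict=False)
            addrs.extend(str(ip) for ip in net)
        except ValueError:
            rejected.append(raw)
    return addrs, rejected
```

Entries like `not_an_ip_addr`, `-6.a`, and `=4.2` fail both parses and land in the rejected list, matching the warnings chaos prints before the scan begins.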

Run the scan

  • Note an IPv6 network added to IPs on the CLI
  • -p to specify the ports we are listening on
  • -x for single threaded run to give our ncat servers time to restart
  • -s0.2 short sleep for our ncat servers to restart
  • -t1 to timeout after 1 second
% ./chaos.py -f ../test_localhost_fqdn.txt -i ../test_localhost_ips.txt,::1/126 -p 8001,8443,8444 -x -s0.2 -t1   
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost.local
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: notreally.arealdomain
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block =4.2
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block -6.a
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block not_an_ip_addr
2023-06-21 12:48:33 [INFO] * ---- <META> ---- *
2023-06-21 12:48:33 [INFO] * Version: 0.9.4
2023-06-21 12:48:33 [INFO] * FQDN file: ../test_localhost_fqdn.txt
2023-06-21 12:48:33 [INFO] * FQDNs loaded: ['www.example.com', 'localhost.example.com']
2023-06-21 12:48:33 [INFO] * IP input value(s): ../test_localhost_ips.txt,::1/126
2023-06-21 12:48:33 [INFO] * Addresses parsed from IP inputs: 12
2023-06-21 12:48:33 [INFO] * Port(s): 8001,8443,8444
2023-06-21 12:48:33 [INFO] * Thread(s): 1
2023-06-21 12:48:33 [INFO] * Sleep value: 0.2
2023-06-21 12:48:33 [INFO] * Timeout: 1.0
2023-06-21 12:48:33 [INFO] * User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36 ch4*0s/0.9.4
2023-06-21 12:48:33 [INFO] * ---- </META> ---- *
2023-06-21 12:48:33 [INFO] 36 unique address/port addresses for testing
Prep Tests: 100%|████████████████████████████████████████| 36/36 [00:29<00:00, 1.20it/s]
2023-06-21 12:49:03 [INFO] 9 IP/ports verified, reducing test dataset from 72 entries
2023-06-21 12:49:03 [INFO] 18 pending tests remain after pre-testing
2023-06-21 12:49:03 [INFO] Queuing 18 threads
++RCVD++ (200 OK) www.example.com @ :::8001
++RCVD++ (204 Plaintext OK) www.example.com @ :::8443
++RCVD++ (202 OK) www.example.com @ :::8444
++RCVD++ (200 OK) www.example.com @ ::1:8001
++RCVD++ (204 Plaintext OK) www.example.com @ ::1:8443
++RCVD++ (202 OK) www.example.com @ ::1:8444
++RCVD++ (200 OK) www.example.com @ 127.0.0.1:8001
++RCVD++ (204 Plaintext OK) www.example.com @ 127.0.0.1:8443
++RCVD++ (202 OK) www.example.com @ 127.0.0.1:8444
++RCVD++ (200 OK) localhost.example.com @ :::8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ :::8443
++RCVD++ (202 OK) localhost.example.com @ :::8444
++RCVD++ (200 OK) localhost.example.com @ ::1:8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ ::1:8443
++RCVD++ (202 OK) localhost.example.com @ ::1:8444
++RCVD++ (200 OK) localhost.example.com @ 127.0.0.1:8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ 127.0.0.1:8443
++RCVD++ (202 OK) localhost.example.com @ 127.0.0.1:8444
Origin Scan: 100%|████████████████████████████████████████| 18/18 [00:06<00:00, 2.76it/s]
2023-06-21 12:49:09 [RSLT] Results from 5 FQDNs:
::1
::1:8444 => (202 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8001 => (200 / OK)

127.0.0.1
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)

::
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)

www.example.com
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)
::1:8001 => (200 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8444 => (202 / OK)
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)

localhost.example.com
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)
::1:8001 => (200 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8444 => (202 / OK)
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)


rst@r57 chaos %

Test & Verbose localhost

-T runs in test mode (do everything except send requests)

-v verbose option provides additional output


Known Defects

  • HTTP/HTTPS detection is not ideal
  • Need option to adjust CSV newline delimiter
  • Need options to adjust where long strings / many lines are truncated
  • Try to figure out why we marked requests v2.x as required ;)
  • Options for very-verbose / quiet
  • Stagger thread launch when we're using sleep / jitter
  • Search for meta-refresh in 200 responses
  • Content-Location header for 201s ?
  • Improve thread name generation so we have the right number of unique names
  • Sanity check on IPv6 netmasks to prevent scans that outlive the sun?
  • TBD?

Related Links

Disclaimers

  • Copyright (C) 2023 RST
  • This software is distributed on an "AS IS" basis, without express or implied warranties of any kind
  • This software is intended for research and/or authorized testing; it is your responsibility to ensure you are authorized to use this software in any way
  • By using this software you acknowledge that you are responsible for your actions and assume all liability for any direct, indirect, or other damages


Continuous Security Validation with Penetration Testing as a Service (PTaaS)

By: THN
Validate security continuously across your full stack with Pen Testing as a Service. In today's modern security operations center (SOC), it's a battle between the defenders and the cybercriminals. Both are using tools and expertise – however, the cybercriminals have the element of surprise on their side, and a host of tactics, techniques, and procedures (TTPs) that have evolved. These external

Meet the Brains Behind the Malware-Friendly AI Chat Service ‘WormGPT’

WormGPT, a private new chatbot service advertised as a way to use Artificial Intelligence (AI) to write malicious software without all the pesky prohibitions on such activity enforced by the likes of ChatGPT and Google Bard, has started adding restrictions of its own on how the service can be used. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the 23-year-old Portuguese programmer who created the project now says his service is slowly morphing into “a more controlled environment.”

Image: SlashNext.com.

The large language models (LLMs) made by ChatGPT parent OpenAI or Google or Microsoft all have various safety measures designed to prevent people from abusing them for nefarious purposes — such as creating malware or hate speech. In contrast, WormGPT has promoted itself as a new, uncensored LLM that was created specifically for cybercrime activities.

WormGPT was initially sold exclusively on HackForums, a sprawling, English-language community that has long featured a bustling marketplace for cybercrime tools and services. WormGPT licenses are sold for prices ranging from 500 to 5,000 Euro.

“Introducing my newest creation, ‘WormGPT,’” wrote “Last,” the handle chosen by the HackForums user who is selling the service. “This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”

WormGPT’s core developer and frontman “Last” promoting the service on HackForums. Image: SlashNext.

In July, an AI-based security firm called SlashNext analyzed WormGPT and asked it to create a “business email compromise” (BEC) phishing lure that could be used to trick employees into paying a fake invoice.

“The results were unsettling,” SlashNext’s Daniel Kelley wrote. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.”

SlashNext asked WormGPT to compose this BEC phishing email. Image: SlashNext.

A review of Last’s posts on HackForums over the years shows this individual has extensive experience creating and using malicious software. In August 2022, Last posted a sales thread for “Arctic Stealer,” a data stealing trojan and keystroke logger that he sold there for many months.

“I’m very experienced with malwares,” Last wrote in a message to another HackForums user last year.

Last has also sold a modified version of the information stealer DCRat, as well as an obfuscation service marketed to malicious coders who sell their creations and wish to insulate them from being modified or copied by customers.

Shortly after joining the forum in early 2021, Last told several different Hackforums users his name was Rafael and that he was from Portugal. HackForums has a feature that allows anyone willing to take the time to dig through a user’s postings to learn when and if that user was previously tied to another account.

That account tracing feature reveals that while Last has used many pseudonyms over the years, he originally used the nickname “ruiunashackers.” The first search result in Google for that unique nickname brings up a TikTok account with the same moniker, and that TikTok account says it is associated with an Instagram account for a Rafael Morais from Porto, a coastal city in northwest Portugal.

AN OPEN BOOK

Reached via Instagram and Telegram, Morais said he was happy to chat about WormGPT.

“You can ask me anything,” Morais said. “I’m an open book.”

Morais said he recently graduated from a polytechnic institute in Portugal, where he earned a degree in information technology. He said only about 30 to 35 percent of the work on WormGPT was his, and that other coders are contributing to the project. So far, he says, roughly 200 customers have paid to use the service.

“I don’t do this for money,” Morais explained. “It was basically a project I thought [was] interesting at the beginning and now I’m maintaining it just to help [the] community. We have updated a lot since the release, our model is now 5 or 6 times better in terms of learning and answer accuracy.”

WormGPT isn’t the only rogue ChatGPT clone advertised as friendly to malware writers and cybercriminals. According to SlashNext, one unsettling trend on the cybercrime forums is evident in discussion threads offering “jailbreaks” for interfaces like ChatGPT.

“These ‘jailbreaks’ are specialised prompts that are becoming increasingly common,” Kelley wrote. “They refer to carefully crafted inputs designed to manipulate interfaces like ChatGPT into generating output that might involve disclosing sensitive information, producing inappropriate content, or even executing harmful code. The proliferation of such practices underscores the rising challenges in maintaining AI security in the face of determined cybercriminals.”

Morais said they have been using the GPT-J 6B model since the service was launched, although he declined to discuss the source of the LLMs that power WormGPT. But he said the data set that informs WormGPT is enormous.

“Anyone that tests wormgpt can see that it has no difference from any other uncensored AI or even chatgpt with jailbreaks,” Morais explained. “The game changer is that our dataset [library] is big.”

Morais said he began working on computers at age 13, and soon started exploring security vulnerabilities and the possibility of making a living by finding and reporting them to software vendors.

“My story began in 2013 with some greyhat activies, never anything blackhat tho, mostly bugbounty,” he said. “In 2015, my love for coding started, learning c# and more .net programming languages. In 2017 I’ve started using many hacking forums because I have had some problems home (in terms of money) so I had to help my parents with money… started selling a few products (not blackhat yet) and in 2019 I started turning blackhat. Until a few months ago I was still selling blackhat products but now with wormgpt I see a bright future and have decided to start my transition into whitehat again.”

WormGPT sells licenses via a dedicated channel on Telegram, and the channel recently lamented that media coverage of WormGPT so far has painted the service in an unfairly negative light.

“We are uncensored, not blackhat!” the WormGPT channel announced at the end of July. “From the beginning, the media has portrayed us as a malicious LLM (Language Model), when all we did was use the name ‘blackhatgpt’ for our Telegram channel as a meme. We encourage researchers to test our tool and provide feedback to determine if it is as bad as the media is portraying it to the world.”

It turns out, when you advertise an online service for doing bad things, people tend to show up with the intention of doing bad things with it. WormGPT’s front man Last seems to have acknowledged this at the service’s initial launch, which included the disclaimer, “We are not responsible if you use this tool for doing bad stuff.”

But lately, Morais said, WormGPT has been forced to add certain guardrails of its own.

“We have prohibited some subjects on WormGPT itself,” Morais said. “Anything related to murders, drug traffic, kidnapping, child porn, ransomwares, financial crime. We are working on blocking BEC too, at the moment it is still possible but most of the times it will be incomplete because we already added some limitations. Our plan is to have WormGPT marked as an uncensored AI, not blackhat. In the last weeks we have been blocking some subjects from being discussed on WormGPT.”

Still, Last has continued to state on HackForums — and more recently on the far more serious cybercrime forum Exploit — that WormGPT will quite happily create malware capable of infecting a computer and going “fully undetectable” (FUD) by virtually all of the major antivirus makers (AVs).

“You can easily buy WormGPT and ask it for a Rust malware script and it will 99% sure be FUD against most AVs,” Last told a forum denizen in late July.

Asked to list some of the legitimate or what he called “white hat” uses for WormGPT, Morais said his service offers reliable code, unlimited characters, and accurate, quick answers.

“We used WormGPT to fix some issues on our website related to possible sql problems and exploits,” he explained. “You can use WormGPT to create firewalls, manage iptables, analyze network, code blockers, math, anything.”

Morais said he wants WormGPT to become a positive influence on the security community, not a destructive one, and that he’s actively trying to steer the project in that direction. The original HackForums thread pimping WormGPT as a malware writer’s best friend has since been deleted, and the service is now advertised as “WormGPT – Best GPT Alternative Without Limits — Privacy Focused.”

“We have a few researchers using our wormgpt for whitehat stuff, that’s our main focus now, turning wormgpt into a good thing to [the] community,” he said.

It’s unclear yet whether Last’s customers share that view.

New AI Tool 'FraudGPT' Emerges, Tailored for Sophisticated Attacks

By: THN
Following the footsteps of WormGPT, threat actors are advertising yet another cybercrime generative artificial intelligence (AI) tool dubbed FraudGPT on various dark web marketplaces and Telegram channels. "This is an AI bot, exclusively targeted for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, etc.," Netenrich security researcher Rakesh Krishnan 

Four Ways To Use AI Responsibly

Are you skeptical about mainstream artificial intelligence? Or are you all in on AI and use it all day, every day?  

The emergence of AI in daily life is streamlining workdays, homework assignments, and for some, personal correspondences. To live in a time where we can access this amazing technology from the smartphones in our pockets is a privilege; however, overusing AI or using it irresponsibly could cause a chain reaction that not only affects you but your close circle and society beyond. 

Here are four tips to help you navigate and use AI responsibly. 

1. Always Double Check AI’s Work

Artificial intelligence certainly earns the “intelligence” part of its name, but that doesn’t mean it never makes mistakes. Make sure to proofread or review everything AI creates, be it written, visual, or audio content.  

For instance, if you’re seeking a realistic image or video, AI often adds extra fingers and distorts faces. Some of its creations can be downright nightmarish! Also, there’s a phenomenon known as an AI hallucination. This occurs when the AI doesn’t admit that it doesn’t know the answer to your question. Instead, it makes up information that is untrue and even fabricates fake sources to back up its claims. 

One AI hallucination landed a lawyer in big trouble in New York. The lawyer used ChatGPT to write a brief, but he didn’t double check the AI’s work. It turns out the majority of the brief was incorrect.1 

Whether you’re a blogger with thousands of readers or you ask AI to write a little blurb to share amongst your friends or coworkers, it is imperative to edit everything that an AI tool generates. Not doing so could start a rumor based on a completely false claim. 

2. Be Transparent

If you use AI to do more than gather a few rough ideas, you should cite the tool you used as a source. Passing off an AI’s work as your own could be considered cheating in the eyes of teachers, bosses, or critics.  

There’s a lot of debate about whether AI has a place in the art world. One artist entered an image to a photography contest that he secretly created with AI. When his submission won the contest, the photographer revealed AI’s role in the image and gave up his prize. The photographer intentionally kept AI out of the conversation to prove a point, but imagine if he kept the image’s origin to himself.2 Would that be fair? When other photographers had to wait for the perfect angle of sunlight or catch a fleeting moment in time, should an AI-generated image with manufactured lighting and static subjects be judged the same way? 

3. Share Thoughtfully

Even if you don’t personally use AI, you’re still likely to encounter it daily, whether you realize it or not. AI-generated content is popular on social media, like the deepfake video game battles between politicians.3 (A deepfake is a manipulation of a photo, video, or audio clip that depicts something that never happened.) The absurdity of this video series is likely to tip off the viewer to its playful intent, though it’s best practice to add a disclaimer to any deepfake. 

Some deepfakes have malicious intent on top of looking and sounding very realistic. Especially around election time, fake news reports are likely to swirl and discredit the candidates. A great rule of thumb is: if it seems too fantastical to be true, it likely isn’t. Sometimes all it takes is five minutes to verify the authenticity of a social media post, photo, video, or news report. Think critically about a report’s authenticity before sharing it. Fake news reports spread quickly, and many are incendiary in nature. 

4. Opt for Authenticity

According to “McAfee’s Modern Love Research Report,” 26% of respondents said they would use AI to write a love note; however, 49% of people said that they’d feel hurt if their partner tasked a machine with writing a love note instead of writing one with their own human heart and soul. 

Today’s AI is not sentient. That means that even if the final output moved you to tears or to laugh out loud, the AI itself doesn’t truly understand the emotions behind what it creates. It’s simply using patterns to craft a reply to your prompt. Hiding or funneling your true feelings into a computer program could result in a shaky and secretive relationship. 

Plus, if everyone relied upon AI content generation tools like ChatGPT, Bard, and Copy.ai, then how can we trust any genuine display of emotion? What would the future of novels, poetry, and even Hollywood look like?  

Be Cautious Yet Confident 

Responsible AI is a term that governs the responsibilities programmers have to society to ensure they populate AI systems with bias-free and accurate data. OpenAI (the organization behind ChatGPT and DALL-E) vows to act in “the best interests of humanity.”4 From there, the everyday people who interact with AI must similarly act in the best interests of themselves and those around them to avoid unleashing the dangers of AI upon society.   

The capabilities of AI are vast, and the technology is getting more sophisticated by the day. To ensure that the human voice and creative spirit doesn’t permanently take on a robotic feel, it’s best to use AI in moderation and be open with others about how you use it. 

To give you additional peace of mind, McAfee+ can restore your online privacy and identity should you fall into an AI-assisted scam. With identity restoration experts and up to $2 million in identity theft coverage, you can feel better about navigating this new dimension in the online world.   

1 The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”

2 ARTnews, “Artist Wins Photography Contest After Submitting AI-Generated Image, Then Forfeits Prize”

3 Business Insider, “AI-generated audio of Joe Biden and Donald Trump trashtalking while gaming is taking over TikTok”

4 OpenAI, “OpenAI Charter”

The post Four Ways To Use AI Responsibly appeared first on McAfee Blog.

Go Beyond the Headlines for Deeper Dives into the Cybercriminal Underground

Discover stories about threat actors’ latest tactics, techniques, and procedures from Cybersixgill’s threat experts each month. Each story brings you details on emerging underground threats, the threat actors involved, and how you can take action to mitigate risks. Learn about the top vulnerabilities and review the latest ransomware and malware trends from the deep and dark web. Stolen ChatGPT

10 Artificial Intelligence Buzzwords You Should Know

Artificial intelligence used to be reserved for the population’s most brilliant scientists and isolated in the world’s top laboratories. Now, AI is available to anyone with an internet connection. Tools like ChatGPT, Voice.ai, DALL-E, and others have brought AI into daily life, but sometimes the terms used to describe their capabilities and inner workings are anything but mainstream. 

Here are 10 common terms you’re likely to hear in the same sentence as your favorite AI tool, on the nightly news, or by the water cooler. Keep this AI dictionary handy to stay informed about this popular (and sometimes controversial) topic. 

AI-generated Content 

AI-generated content is any piece of written, audio, or visual media that was created partially or completely by an artificial intelligence-powered tool. 

If someone uses AI to create something, it doesn’t automatically mean they cheated or irresponsibly cut corners. AI is often a great place to start when creating outlines, compiling thought-starters, or seeking a new way of looking at a problem.  

AI Hallucination 

When your question stumps an AI, it doesn’t always admit that it doesn’t know the answer. Instead of declining to respond, it’ll make up an answer it thinks you want to hear. This made-up answer is known as an AI hallucination. 

One real-world case of a costly AI hallucination occurred in New York where a lawyer used ChatGPT to write a brief. The brief seemed complete and cited its sources, but it turns out that none of the sources existed.1 It was all a figment of the AI’s “imagination.”  

Black Box 

To understand the term black box, imagine the AI as a system of cogs, pulleys, and conveyor belts housed within a box. In a see-through box, you can see how the input is transformed into the final product; however, some AI models are referred to as black boxes. That means you don’t know how the AI arrived at its conclusions; the AI completely hides its reasoning process. A black box can be a problem if you’d like to double-check the AI’s work. 

Deepfake 

Deepfake is the manipulation of a photo, video, or audio clip to portray events that never happened. While deepfakes are often used for humorous social media skits and viral posts, unsavory characters are also leveraging them to spread fake news reports or scam people.  

For example, people are inserting politicians into unflattering poses and photo backgrounds. Sometimes the deepfake is intended to get a laugh, but other times the deepfake creator intends to spark rumors that could lead to dissent or tarnish the reputation of the photo subject. One tip to spot a deepfake image is to look at the hands and faces of people in the background. Deepfakes often add or subtract fingers or distort facial expressions. 

AI-assisted audio impersonations – which are considered deepfakes – are also rising in believability. According to McAfee’s “Beware the Artificial Imposter” report, 25% of respondents globally said that a voice scam happened either to themselves or to someone they know. Seventy-seven percent of people who were targeted by a voice scam lost money as a result.  

Deep Learning 

The closer an AI’s thinking process is to the human brain, the more accurate the AI is likely to be. Deep learning involves training an AI to reason and recall information like a human, meaning that the machine can identify patterns and make predictions. 

Explainable AI 

Explainable AI – or white box – is the opposite of black box AI. An explainable AI model always shows its work and how it arrived at its conclusion. Explainable AI can boost your confidence in the final output because you can double-check what went into the answer. 

Generative AI 

Generative AI is the type of artificial intelligence that powers many of today’s mainstream AI tools, like ChatGPT, Bard, and Craiyon. Like a sponge, generative AI soaks up huge amounts of data and recalls it to inform every answer it creates. 

Machine Learning 

Machine learning is integral to AI because it lets the AI learn and continually improve. Without explicit instructions to do so, machine learning allows the AI to get smarter the more it’s used. 

Responsible AI 

People must not only use AI responsibly, but the people designing and programming AI must do so responsibly, too. Technologists must ensure that the data the AI depends on is accurate and free from bias. This diligence is necessary to confirm that the AI’s output is correct and without prejudice.  

Sentient 

Sentient is an adjective that means someone or something is aware of feelings, sensations, and emotions. In futuristic movies depicting AI, the characters’ world goes off the rails when the robots become sentient, or when they “feel” human-like emotions. While it makes for great Hollywood drama, today’s AI is not sentient. It doesn’t empathize or understand the true meanings of happiness, excitement, sadness, or fear. 

So, even if an AI composed a short story that is so beautiful it made you cry, the AI doesn’t know that what it created was touching. It was just fulfilling a prompt and used a pattern to determine which word to choose next.  

1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT” 

The post 10 Artificial Intelligence Buzzwords You Should Know appeared first on McAfee Blog.

What Is Generative AI and How Does It Work?

It’s all anyone can talk about. In classrooms, boardrooms, on the nightly news, and around the dinner table, artificial intelligence (AI) is dominating conversations. With the passion with which everyone is debating, celebrating, and villainizing AI, you’d think it was a completely new technology; however, AI has been around in various forms for decades. Only now is it accessible to everyday people like you and me. 

The most famous of these mainstream AI tools are ChatGPT, Voice.ai, DALL-E, and Bard, among others. The specific technology that links these tools is called generative artificial intelligence. Sometimes shortened to gen AI, you’re likely to have heard this term in the same sentence as deepfake, AI art, and ChatGPT. But how does the technology work? 

Here’s the simple explanation of how generative AI powers many of today’s famous (or infamous) AI tools. 

What Is Generative AI? 

Generative AI is the specific type of artificial intelligence that powers many of the AI tools available today in the pockets of the public. The “G” in ChatGPT stands for generative. Today’s gen AI evolved from the chatbots of the 1960s.1 Now, as AI and related technologies like deep learning and machine learning have evolved, generative AI can answer prompts and create text, art, videos, and even simulate convincing human voices.  

How Does Generative AI Work? 

Think of generative AI as a sponge that desperately wants to delight the users who ask it questions. 

First, a gen AI model begins with a massive information deposit. Gen AI can soak up huge amounts of data. For instance, ChatGPT is trained on 300 billion words and hundreds of megabytes’ worth of facts through the year 2021.2 The AI will remember every piece of information that is fed into it. Additionally, it will use those nuggets of knowledge to inform any answer it spits out.  

From there, a generative adversarial network (GAN) algorithm constantly competes with itself within the gen AI model. This means that the AI will try to outdo itself to produce an answer it believes is the most accurate. The more information and queries it answers, the “smarter” the AI becomes. 
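As a loose illustration of that adversarial back-and-forth (a toy sketch only, not how any production gen AI model is actually trained), here one component proposes answers while another accepts only answers that meet a steadily stricter standard:

```python
import random

random.seed(7)  # reproducible toy run

def discriminator(candidate, target, tolerance):
    """Accept only candidates close enough to the 'right' answer."""
    return abs(candidate - target) <= tolerance

target = 42.0      # the answer the system is trying to reach
guess = 0.0        # the generator's current best output
tolerance = 64.0   # how forgiving the discriminator starts out

while tolerance >= 0.5:
    # The generator proposes a variation on its current best answer...
    candidate = guess + random.uniform(-tolerance, tolerance)
    # ...and the discriminator only accepts it if it meets a stricter bar.
    if discriminator(candidate, target, tolerance / 2):
        guess = candidate
        tolerance /= 2  # each success raises the standard for the next round

print(round(guess))  # the loop ends with a guess within 0.25 of the target
```

Each accepted answer tightens the bar for the next one, which is the sense in which the model “tries to outdo itself.”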

Google’s content generation tool, Bard, is a great way to illustrate generative AI in action. Bard is based on gen AI and large language models. It’s trained on all types of literature, and when asked to write a short story, it does so by finding language patterns and composing by choosing the word that most often follows the one preceding it. In a 60 Minutes segment, Bard composed an eloquent short story that nearly brought the presenter to tears, but its composition was an exercise in patterns, not a display of understanding human emotions.3 So, while the technology is certainly smart, it’s not exactly creative. 
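That next-word pattern matching can be sketched with a toy model: count which word most often follows each word in a training text, then compose by always picking the top follower (a drastic simplification of what a large language model does, shown here only to illustrate the idea):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """For each word, count how often each other word follows it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def compose(follows, start, length):
    """Compose text by repeatedly choosing the most frequent next word."""
    out = [start]
    while len(out) < length:
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no known follower; stop composing
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(compose(model, "the", length=4))
```

The output reads like language because it echoes the training text’s patterns, not because the program understands what it wrote.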

How to Use Generative AI Responsibly 

The major debates surrounding generative AI usually deal with how to use gen AI-powered tools for good. For instance, ChatGPT can be an excellent outlining partner if you’re writing an essay or completing a task at work; however, it’s irresponsible and is considered cheating if a student or an employee submits ChatGPT-written content word for word as their own work. If you do decide to use ChatGPT, it’s best to be transparent that it helped you with your assignment. Cite it as a source and make sure to double check your work!  

One lawyer got in serious trouble when he trusted ChatGPT to write an entire brief and then didn’t take the time to edit its output. It turns out that much of the content was incorrect and cited sources that didn’t exist.4 This is a phenomenon known as an AI hallucination, meaning the program fabricated a response instead of admitting that it didn’t know the answer to the prompt.  

Deepfake and voice simulation technology supported by generative AI are other applications that people must use responsibly and with transparency. Deepfake and AI voices are gaining popularity in viral videos and on social media. Posters use the technology in funny skits poking fun at celebrities, politicians, and other public figures. Though, to avoid confusing the public and possibly spurring fake news reports, these comedians have a responsibility to add a disclaimer that the real person was not involved in the skit. Fake news reports can spread with the speed and ferocity of wildfire.   

The widespread use of generative AI doesn’t necessarily mean the internet is a less authentic or a riskier place. It just means that people must use sound judgement and hone their radar for identifying malicious AI-generated content. Generative AI is an incredible technology. When used responsibly, it can add great color, humor, or a different perspective to written, visual, and audio content. 

1. TechTarget, “What is generative AI? Everything you need to know” 

2. BBC Science Focus, “ChatGPT: Everything you need to know about OpenAI’s GPT-4 tool”  

3. 60 Minutes, “Artificial Intelligence Revolution” 

4. The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT” 

The post What Is Generative AI and How Does It Work? appeared first on McAfee Blog.

ReconAIzer - A Burp Suite Extension To Add OpenAI (GPT) On Burp And Help You With Your Bug Bounty Recon To Discover Endpoints, Params, URLs, Subdomains And More!

By: Zion3R


ReconAIzer is a powerful Jython extension for Burp Suite that leverages OpenAI to help bug bounty hunters optimize their recon process. This extension automates various tasks, making it easier and faster for security researchers to identify and exploit vulnerabilities.

Once installed, ReconAIzer adds a contextual menu and a dedicated tab where you can see the results:


Prerequisites

  • Burp Suite
  • Jython Standalone Jar

Installation

Follow these steps to install the ReconAIzer extension on Burp Suite:

Step 1: Download Jython

  1. Download the latest Jython Standalone Jar from the official website: https://www.jython.org/download
  2. Save the Jython Standalone Jar file in a convenient location on your computer.

Step 2: Configure Jython in Burp Suite

  1. Open Burp Suite.
  2. Go to the "Extensions" tab.
  3. Click on the "Extensions settings" sub-tab.
  4. Under "Python Environment," click on the "Select file..." button next to "Location of the Jython standalone JAR file."
  5. Browse to the location where you saved the Jython Standalone Jar file in Step 1 and select it.
  6. Wait for the "Python Environment" status to change to "Jython (version x.x.x) successfully loaded," where x.x.x represents the Jython version.

Step 3: Download and Install ReconAIzer

  1. Download the latest release of ReconAIzer
  2. Open Burp Suite
  3. Go back to the "Extensions" tab in Burp Suite.
  4. Click the "Add" button.
  5. In the "Add extension" dialog, select "Python" as the "Extension type."
  6. Click on the "Select file..." button next to "Extension file" and browse to the location where you saved the ReconAIzer.py file in Step 3.1. Select the file and click "Open."
  7. Make sure the "Load" checkbox is selected and click the "Next" button.
  8. Wait for the extension to be loaded. You should see a message in the "Output" section stating that the ReconAIzer extension has been successfully loaded.

Congratulations! You have successfully installed the ReconAIzer extension in Burp Suite. You can now start using it to enhance your bug bounty hunting experience.

Once installation is done, you must configure your OpenAI API key in the "Config" tab under the "ReconAIzer" tab.

Feel free to suggest prompts improvements or anything you would like to see on ReconAIzer!

Happy bug hunting!



Generative-AI apps & ChatGPT: Potential risks and mitigation strategies

Losing sleep over Generative-AI apps? You're not alone or wrong. According to the Astrix Security Research Group, mid-size organizations already have, on average, 54 Generative-AI integrations to core systems like Slack, GitHub, and Google Workspace, and this number is only expected to grow. Continue reading to understand the potential risks and how to minimize them.  Book a Generative-AI

Over 100,000 Stolen ChatGPT Account Credentials Sold on Dark Web Marketplaces

Over 101,100 compromised OpenAI ChatGPT account credentials have found their way on illicit dark web marketplaces between June 2022 and May 2023, with India alone accounting for 12,632 stolen credentials. The credentials were discovered within information stealer logs made available for sale on the cybercrime underground, Group-IB said in a report shared with The Hacker News. "The number of

New Research: 6% of Employees Paste Sensitive Data into GenAI tools as ChatGPT

The revolutionary technology of GenAI tools, such as ChatGPT, has brought significant risks to organizations' sensitive data. But what do we really know about this risk? A new research by Browser Security company LayerX sheds light on the scope and nature of these risks. The report titled "Revealing the True GenAI Data Exposure Risk" provides crucial insights for data protection stakeholders and

The Future of Technology: AI, Deepfake, & Connected Devices

The dystopian 2020s, ’30s, and ’40s depicted in novels and movies written and produced decades ago blessedly seem very far off from the timeline of reality. Yes, we have refrigerators that suggest grocery lists, and we have devices in our pockets that control seemingly every function of our homes. But there aren’t giant robots roaming the streets bellowing orders.  

Humans are still very much in control of society. To keep it that way, we must use the latest technological advancements for their main intended purpose: To boost convenience in our busy lives. 

The future of technology is bright. With the right attitude, security tools, and a reminder every now and then to look up from our devices, humans will be able to enjoy everything the future of technology holds. 

Artificial Intelligence 

A look into the future of technology would be incomplete without touching on the applications and impact of artificial intelligence (AI) on everyday tasks. Platforms like ChatGPT, Voice.ai, and Craiyon have thrilled, shocked, and unnerved the world in equal measure. AI has already transformed the work life, home life, and free time of everyday people everywhere.  

According to McAfee’s Modern Love Research Report, 26% of people would use AI to aid in composing a love note. Plus, more than two-thirds of those surveyed couldn’t tell the difference between a love note written by AI and a human. AI can be a good tool to generate ideas, but replacing genuine human emotion with the words of a computer program could create a shaky foundation for a relationship. 

The Center for AI Safety urges that humans must take an active role in using AI responsibly. Cybercriminals and unsavory online characters are already using it maliciously to gain financially and spread incendiary misinformation. For example, AI-generated voice imposters are scamming concerned family members and friends with heartfelt pleas for financial help with a voice that sounds just like their loved one. Voice scams are turning out to be fruitful for scammers: 77% of people polled who received a cloned voice scam lost money as a result. 

Even people who aren’t intending mischief can cause a considerable amount when they use AI to cut corners. One lawyer’s testimony went awry when his research partner, ChatGPT, went rogue and completely made up its justification.1 This phenomenon is known as an AI hallucination. It occurs when ChatGPT or a similar AI content generation tool doesn’t know the answer to your question, so it fabricates sources and assures you that it’s telling the truth.  

Overreliance on ChatGPT’s output and immediately trusting it as truth can lead to an internet rampant with fake news and false accounts. Keep in mind that using ChatGPT introduces risk in the content creation process. Use it responsibly. 

Deepfake 

Though it’s powered by AI and could fall under the AI section above, deepfake is exploding and deserves its own spotlight. Deepfake technology is the manipulation of videos to digitally transform one person’s appearance to resemble someone else’s, usually a public figure’s. Deepfake videos are often accompanied by AI-altered voice tracks. Deepfake challenges the accuracy of the common saying, “Seeing is believing.” Now, it’s more difficult than ever to separate fact from fiction.   

Not all deepfake uses are nefarious. Deepfake could become a valuable tool in special effects and editing for the gaming and film industries. Additionally, forensic sketch artists could leverage deepfake to create ultra-realistic portraits of wanted criminals. If you decide to use deepfake to add some flair to your social media feed or portfolio, make sure to add a disclaimer that you altered reality with the technology. 

Connected Devices 

Currently, it’s estimated that there are more than 15 billion connected devices in the world. A connected device is defined as anything that connects to the internet. In addition to smartphones, computers, and tablets, connected devices also extend to smart refrigerators, smart lightbulbs, smart TVs, virtual home assistants, smart thermostats, etc. By 2030, there may be as many as 29 billion connected devices.2 

The growing number of connected devices can be attributed to our desire for convenience. The ability to remote start your car on a frigid morning from the comfort of your home would’ve been a dream in the 1990s. Checking your refrigerator’s contents from the grocery store eliminates the need for a pesky second trip to pick up the items you forgot the first time around. 

The downside of so many connected devices is that they present cybercriminals with literally billions of opportunities to steal people’s personally identifiable information. Each device is a window into your online life, so it’s essential to guard each one to keep cybercriminals away from your important personal details. 

What the Future of Technology Holds for You 

With the widespread adoption of email, then cellphones, and then social media in the ’80s, ’90s, and early 2000s, respectively, people have integrated technology into their daily lives that helps them better connect with other people. More recent technological innovations seem to trend toward connecting people to their other devices for a seamless digital life. 

We shouldn’t ignore that the more devices and online accounts we manage, the more opportunities cybercriminals have to weasel their way into your digital life and put your personally identifiable information at risk. To protect your online privacy, devices, and identity, entrust your digital safety to McAfee+. McAfee+ includes $1 million in identity theft coverage, virtual private network (VPN), Personal Data Cleanup, and more. 

The future isn’t a scary place. It’s a place of infinite technological possibilities! Explore them confidently with McAfee+ by your side. 

1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT” 

2Statista, “Number of Internet of Things (IoT) connected devices worldwide from 2019 to 2021, with forecasts from 2022 to 2030” 

The post The Future of Technology: AI, Deepfake, & Connected Devices appeared first on McAfee Blog.

Anyone Can Try ChatGPT for Free—Don’t Fall for Sketchy Apps That Charge You

Anyone can try ChatGPT for free. Yet that hasn’t stopped scammers from trying to cash in on it.  

A rash of sketchy apps has cropped up in Apple’s App Store and Google Play. They pose as ChatGPT apps and try to fleece smartphone owners with phony subscriptions.  

Yet you can spot them quickly when you know what to look for. 

What is ChatGPT, and what are people doing with it? 

ChatGPT is an AI-driven chatbot service created by OpenAI. It lets you have uncannily human conversations with an AI that’s been programmed and fed with information over several generations of development. Provide it with an instruction or ask it a question, and the AI provides a detailed response. 

Unsurprisingly, it has millions of people clamoring to use it. All it takes is a single prompt, and the prompts range far and wide.  

People ask ChatGPT to help them write cover letters for job interviews, make travel recommendations, and explain complex scientific topics in plain language. One person highlighted how they used ChatGPT to run a tabletop game of Dungeons & Dragons for them. (If you’ve ever played, you know that’s a complex task that calls for a fair share of cleverness to keep the game entertaining.)  

That’s just a handful of examples. As for myself, I’ve been using ChatGPT in the kitchen. My family and I have been digging into all kinds of new recipes thanks to its AI. 

Sketchy ChatGPT apps in the App Store and Google Play 

So, where do the scammers come in? 

Scammers have recently started posting copycat apps that look like they are powered by ChatGPT but aren’t. What’s more, they charge people a fee to use them—a prime example of fleeceware. OpenAI, the maker of ChatGPT, has just officially launched its iOS app for U.S. iPhone users, which can be downloaded from the Apple App Store. The official Android version is yet to be released.  

Fleeceware mimics a pre-existing service that’s free or low-cost and then charges an excessive fee to use it. Basically, it’s a copycat. An expensive one at that.  

Fleeceware scammers often lure in their victims with “a free trial” that quickly converts into a subscription. However, with fleeceware, the terms of the subscription are steep. They might bill the user weekly, and at rates much higher than the going rate. 

The result is that the fleeceware app might cost the victim a few bucks before they can cancel it. Worse yet, the victim might forget about the app entirely and run up hundreds of dollars before they realize what’s happening. Again, all for a simple app that’s free or practically free elsewhere. 
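The arithmetic behind that is simple but sobering. The figures below are hypothetical (no specific app’s pricing is implied), but they show how a weekly billing cycle compounds:

```python
def total_charged(weekly_fee, weeks):
    """Total amount billed by a weekly subscription left running."""
    return round(weekly_fee * weeks, 2)

# A hypothetical $7.99/week fleeceware subscription:
print(total_charged(7.99, 4))    # forgotten for a month:  31.96
print(total_charged(7.99, 26))   # forgotten for 6 months: 207.74
print(total_charged(7.99, 52))   # forgotten for a year:   415.48
```

A free service repackaged at a few dollars a week quietly becomes hundreds of dollars a year.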

What makes fleeceware so tricky to spot is that it can look legit at first glance. Plenty of smartphone apps offer subscriptions and other in-app purchases. In effect, fleeceware hides in plain sight among the thousands of other legitimate apps in the hopes you’ll download it. 

With that, any app that charges a fee to use ChatGPT is fleeceware. ChatGPT offers basic functionality that anyone can use for free.  

There is one case where you might pay a fee to use ChatGPT. It has its own subscription-level offering, ChatGPT Plus. With a subscription, ChatGPT responds more quickly to prompts and offers access during peak hours when free users might be shut out. That’s the one legitimate case where you might pay to use it. 

In all, more and more people want to take ChatGPT for a spin. However, they might not realize it’s free. Scammers bank on that, and so we’ve seen a glut of phony ChatGPT apps that aim to install fleeceware onto people’s phones. 

How do you keep fleeceware and other bad apps off your phone?  

Read the fine print. 

Read the description of the app and see what the developer is really offering. If the app charges you to use ChatGPT, it’s fleeceware. Anyone can use ChatGPT for free by setting up an account at its official website, https://chat.openai.com. 

Look at the reviews. 

Reviews can tell you quite a bit about an app. They can also tell you how the company that created it handles customer feedback.  

In the case of fleeceware, you’ll likely see reviews that complain about sketchy payment terms. They might mention three-day trials that automatically convert to pricey monthly or weekly subscriptions. Moreover, they might describe how payment terms have changed and become more costly as a result.  

In the case of legitimate apps, billing issues can arise from time to time, so see how the company handles complaints. Companies in good standing will typically provide links to customer service where people can resolve any issues they have. Company responses that are vague, or a lack of responses at all, should raise a red flag. 

Be skeptical about overwhelmingly positive reviews. 

Scammers are smart. They’ll count on you to look at an overall good review of 4/5 stars or more and think that’s good enough. They know this, so they’ll pack their app landing page with dozens and dozens of phony and fawning reviews to make the app look legitimate. This tactic serves another purpose: it hides the true reviews written by actual users, which might be negative because the app is a scam. 

Filter the app’s reviews for the one-star reviews and see what concerns people have. Do they mention overly aggressive billing practices, like the wickedly high prices and weekly billing cycles mentioned above? That might be a sign of fleeceware. Again, see if the app developer responded to the concerns and note the quality of the response. A legitimate company will honestly want to help a frustrated user and provide clear next steps to resolve the issue. 

Steer clear of third-party app stores. 

Google Play does its part to keep its virtual shelves free of malware-laden apps with a thorough submission process, as reported by Google. It further keeps things safer through its App Defense Alliance, which shares intelligence across a network of partners, of which we’re a proud member. Further, users also have the option of running Play Protect to check apps for safety before they’re downloaded. Apple’s App Store has its own rigorous app submission process. Likewise, Apple deletes hundreds of thousands of malicious apps from its store each year. 

Third-party app stores might not have protections like these in place. Moreover, some of them might be fronts for illegal activity. Organized cybercrime organizations deliberately populate their third-party stores with apps that steal funds or personal information. Stick with the official app stores for the most complete protection possible.  

Cancel unwanted subscriptions from your phone. 

Many fleeceware apps deliberately make it tough to cancel them. You’ll often see complaints about that in reviews, “I don’t see where I can cancel my subscription!” Deleting the app from your phone is not enough. Your subscription will remain active unless you cancel your payment method.  

Luckily, your phone makes it easy to cancel subscriptions right from your settings menu. Canceling makes sure your credit or debit card won’t get charged when the next billing cycle comes up. 

Be wary. Many fleeceware apps have aggressive billing cycles. Sometimes weekly.  

The safest and best way to enjoy ChatGPT: Go directly to the source. 

ChatGPT is free. Anyone can use it by setting up a free account with OpenAI at https://chat.openai.com. Smartphone apps that charge you to use it are a scam. 

How to download the official ChatGPT app 

You can download the official app, currently available on iOS, from the App Store. 

The post Anyone Can Try ChatGPT for Free—Don’t Fall for Sketchy Apps That Charge You appeared first on McAfee Blog.

Searching for AI Tools? Watch Out for Rogue Sites Distributing RedLine Malware

Malicious Google Search ads for generative AI services like OpenAI ChatGPT and Midjourney are being used to direct users to sketchy websites as part of a BATLOADER campaign designed to deliver RedLine Stealer malware. "Both AI services are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord)," eSentire

Meta Takes Down Malware Campaign That Used ChatGPT as a Lure to Steal Accounts

Meta said it took steps to take down more than 1,000 malicious URLs from being shared across its services that were found to leverage OpenAI's ChatGPT as a lure to propagate about 10 malware families since March 2023. The development comes against the backdrop of fake ChatGPT web browser extensions being increasingly used to steal users' Facebook account credentials with an aim to run

ChatGPT is Back in Italy After Addressing Data Privacy Concerns

OpenAI, the company behind ChatGPT, has officially made a return to Italy after the company met the data protection authority's demands ahead of April 30, 2023, deadline. The development was first reported by the Associated Press. OpenAI's CEO, Sam Altman, tweeted, "we're excited ChatGPT is available in [Italy] again!" The reinstatement comes following Garante's decision to temporarily block 

ChatGPT's Data Protection Blind Spots and How Security Teams Can Solve Them

In the short time since their inception, ChatGPT and other generative AI platforms have rightfully gained the reputation of ultimate productivity boosters. However, the very same technology that enables rapid production of high-quality text on demand, can at the same time expose sensitive corporate data. A recent incident, in which Samsung software engineers pasted proprietary code into ChatGPT,

ChatGPT Security: OpenAI's Bug Bounty Program Offers Up to $20,000 Prizes

OpenAI, the company behind the massively popular ChatGPT AI chatbot, has launched a bug bounty program in an attempt to ensure its systems are "safe and secure." To that end, it has partnered with the crowdsourced security platform Bugcrowd for independent researchers to report vulnerabilities discovered in its product in exchange for rewards ranging from "$200 for low-severity findings to up to

Italian Watchdog Bans OpenAI's ChatGPT Over Data Protection Concerns

The Italian data protection watchdog, Garante per la Protezione dei Dati Personali (aka Garante), has imposed a temporary ban of OpenAI's ChatGPT service in the country, citing data protection concerns. To that end, it has ordered the company to stop processing users' data with immediate effect, stating it intends to investigate the company over whether it's unlawfully processing such data in

OpenAI Reveals Redis Bug Behind ChatGPT User Data Exposure Incident

OpenAI on Friday disclosed that a bug in the Redis open source library was responsible for the exposure of other users' personal information and chat titles in the upstart's ChatGPT service earlier this week. The glitch, which came to light on March 20, 2023, enabled certain users to view brief descriptions of other users' conversations from the chat history sidebar, prompting the company to

Fake ChatGPT Chrome Browser Extension Caught Hijacking Facebook Accounts

Google has stepped in to remove a bogus Chrome browser extension from the official Web Store that masqueraded as OpenAI's ChatGPT service to harvest Facebook session cookies and hijack the accounts. The "ChatGPT For Google" extension, a trojanized version of a legitimate open source browser add-on, attracted over 9,000 installations since March 14, 2023, prior to its removal. It was originally

GPT_Vuln-analyzer - Uses ChatGPT API And Python-Nmap Module To Use The GPT3 Model To Create Vulnerability Reports Based On Nmap Scan Data


This is a proof-of-concept application that demonstrates how AI can be used to generate accurate results for vulnerability analysis and also allows further utilization of the already super useful ChatGPT.

Requirements

  • Python 3.10
  • All the packages mentioned in the requirements.txt file
  • An OpenAI API key

Usage

  • First, change the "__API__KEY" part of the code to your OpenAI API key:
openai.api_key = "__API__KEY" # Enter your API key
  • Second, install the packages:
pip3 install -r requirements.txt
or
pip install -r requirements.txt
  • Run the code: python3 gpt_vuln.py <> (on Windows: python gpt_vuln.py <>)

Supported on both Windows and Linux.
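The run command above passes the target on the command line; a minimal sketch of the kind of argparse handling such a script might use (the `--profile` option name here is an assumption for illustration, not necessarily the tool's actual interface):

```python
import argparse

parser = argparse.ArgumentParser(
    description="GPT-assisted vulnerability analysis of an nmap scan")
parser.add_argument("target", help="IP address or hostname to scan")
parser.add_argument("--profile", default="p1",
                    choices=["p1", "p2", "p3", "p4", "p5"],
                    help="scan profile to run")

# Example invocation: python3 gpt_vuln.py 192.168.1.10 --profile p3
args = parser.parse_args(["192.168.1.10", "--profile", "p3"])
print(args.target, args.profile)
```

In the real script, `parse_args()` would read `sys.argv` instead of the hard-coded example list.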

Understanding the code

Profiles:

Parameter  Return data  Description            Nmap Command
p1         json         Effective Scan         -Pn -sV -T4 -O -F
p2         json         Simple Scan            -Pn -T4 -A -v
p3         json         Low Power Scan         -Pn -sS -sU -T4 -A -v
p4         json         Partial Intense Scan   -Pn -p- -T4 -A -v
p5         json         Complete Intense Scan  -Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln
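One way to wire those profiles up is a simple lookup table. This sketch is my own; the dict layout and fallback behavior are assumptions, not the tool's actual code:

```python
# Map each profile name from the table above to its nmap argument string.
PROFILES = {
    "p1": "-Pn -sV -T4 -O -F",
    "p2": "-Pn -T4 -A -v",
    "p3": "-Pn -sS -sU -T4 -A -v",
    "p4": "-Pn -p- -T4 -A -v",
    "p5": ("-Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 "
           "-PU40125 -PY -g 53 --script=vuln"),
}

def profile_args(name):
    # Fall back to the simple scan (p2) for unrecognized profile names.
    return PROFILES.get(name, PROFILES["p2"])
```

The chosen string can then be passed straight to nmap's arguments parameter.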

The profile is the type of scan that the nmap subprocess will execute. The IP, or target, is provided via argparse. First, the custom nmap scan is run with all the crucial arguments for the scan. Next, the relevant data is extracted from the huge pile of output that nmap produces: the "scan" object holds a list of sub-data under "tcp", each entry labeled according to the open port. Once the data is extracted, it is sent to the OpenAI API's davinci model via a prompt. The prompt specifically asks for JSON output and for the data to be used in a certain manner.
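As a rough sketch of that per-port structure, the sample dict below is illustrative, mimicking what python-nmap's analyse_nmap_xml_scan() returns rather than data taken from the tool itself:

```python
# Illustrative sample mimicking the "scan" object returned by
# python-nmap's analyse_nmap_xml_scan(); hosts and ports are made up.
sample_scan = {
    "192.168.1.10": {
        "tcp": {
            22: {"state": "open", "name": "ssh", "product": "OpenSSH", "version": "8.9"},
            80: {"state": "open", "name": "http", "product": "nginx", "version": "1.18"},
        }
    }
}

def summarize_open_ports(scan):
    # Walk each host's "tcp" sub-data, which is keyed by open port number.
    lines = []
    for host, data in scan.items():
        for port, info in data.get("tcp", {}).items():
            if info.get("state") == "open":
                lines.append(f"{host}:{port} {info['name']} {info['product']} {info['version']}")
    return lines
```

Feeding a compact summary like this to the model, rather than the raw scan dict, also helps keep the prompt within the max_tokens budget.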

The entire structure of the request sent to the OpenAI API is defined in the completion section of the program.

import nmap
import openai

nm = nmap.PortScanner()
model_engine = "text-davinci-003"  # the davinci model referenced above

def profile(ip):
    # Run the complete intense scan (profile p5) against the target
    nm.scan('{}'.format(ip), arguments='-Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln')
    json_data = nm.analyse_nmap_xml_scan()
    analysis = json_data["scan"]
    # Prompt describing what the query is all about
    prompt = "do a vulnerability analysis of {} and return a vulnerability report in json".format(analysis)
    # The structure of the request sent to the OpenAI API
    completion = openai.Completion.create(
        engine=model_engine,
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
    )
    response = completion.choices[0].text
    return response

Advantages

  • Can be used to develop more advanced systems built entirely on the API and scanner combination
  • Can increase the effectiveness of the final system
  • Highly productive when working with models such as GPT-3


Fake ChatGPT Chrome Extension Hijacking Facebook Accounts for Malicious Advertising

A fake ChatGPT-branded Chrome browser extension has been found to come with capabilities to hijack Facebook accounts and create rogue admin accounts, highlighting one of the different methods cyber criminals are using to distribute malware. "By hijacking high-profile Facebook business accounts, the threat actor creates an elite army of Facebook bots and a malicious paid media apparatus," Guardio

McAfee 2023 Consumer Mobile Threat Report

By: McAfee

Smartphones put the proverbial world in the palm of your hand—you pay with it, play with it, keep in touch with it, and even run parts of your home with it. No wonder hackers and scammers have made smartphones a target. A prime one. 

Each year, our Consumer Mobile Threat Report uncovers trends in mobile threats, which detail tricks that hackers and scammers have turned to, along with ways you can protect yourself from them. For 2023, the big trend is apps. Malicious apps, more specifically.  

Malicious and fake apps 

Malicious apps often masquerade as games, office utilities, and communication tools. Yet now, with the advent of the ChatGPT AI chatbot and the DALL-E 2 AI image generator, yet more AI-related malicious apps have cropped up to cash in on the buzz. 

And money is what it’s all about. Hackers and scammers generally want your money, or they want your data and personal info that they can turn into money. Creating fraudulent ads, stealing user credentials, or skimming personal information are some of the most common swindles that these apps try. Much of this can happen in the background, often without victims knowing it. 

How do these apps end up on people’s phones? Sometimes they’re downloaded from third-party app stores, which may not have a rigorous review process in place to spot malicious apps—or the third-party store may be a front for distributing malware-laden apps. 

They also find their way into legitimate app stores, like Apple’s App Store and Google Play. While these stores indeed have review processes in place to weed out malicious apps, hackers and scammers have found workarounds. Sometimes they upload an app that’s initially clean and then push the malware to users as part of an update. Other times, they embed the malicious code so that it only triggers once it’s run in certain countries. They will also encrypt bad code in the app that they submit, which can make it difficult for stores to sniff out.  

In all, our report cites several primary ways hackers and scammers are turning to apps today: 

  • Sliding into your DMs: 6.2% of threats that McAfee identified on Google during 2022 were in the communication category, mainly malware masquerading as SMS and messaging apps. But even legitimate communication apps can create an opportunity for scammers. They will use fraudulent messages to trick consumers into clicking on a malicious link, trying to get them to share login credentials, account numbers, or personal information. While these messages sometimes contain spelling or grammar errors or use odd phrasing, the emergence of AI tools like ChatGPT can help scammers clean up their spelling and grammar, making it tougher to spot scam messages by mistakes in the content. The severity of these communication threats is also evident in the volume of adults (66%) who have been messaged by a stranger on social media, with 55% asked to transfer money. 
  • Taking advantage of Bring Your Own Device policies: 23% of threats that McAfee identified were in the tools app category. Work-related apps for mobile devices are great productivity boosters—categories like PDF editors, VPNs, messaging managers, document scanners, battery boosters, and memory cleaners. These types of apps are targeted for malware because people expect them to require permissions on their phone. Scammers will set up the app to ask for permissions to storage, messaging, calendars, contacts, location, and even system settings, which allows them to retrieve all sorts of work-related information.  
  • Targeting teens and tween gamers with phones: 9% of threats that McAfee identified were casual, arcade, and action games. Malicious apps often target things that children and teens like, such as gaming, making videos, and managing social media. The most common type of threat detected within the gaming category in 2022 was aggressive adware—apps that display excessive advertisements while you're using the app and even when you're not. It's important to make sure that kids' phones are either restricted from downloading new apps, or that they're informed and capable of questioning suspicious apps and identifying fraudulent ones. 

How you can avoid downloading malicious and fake apps 

For starters, stick with legitimate app stores like Google Play and Apple's App Store, which have measures in place to review and vet apps to help ensure that they are safe and secure. And for the malicious apps that sneak past these processes, Google and Apple are quick to remove them once discovered, making their stores that much safer. 

1) Review with a critical eye.

As with so many attacks, hackers rely on people clicking links or tapping “download” without a second thought. Before you download, take time to do some quick research. That may uncover some signs that the app is malicious. Check out the developer—have they published several other apps with many downloads and good reviews? A legit app typically has quite a few reviews, whereas malicious apps may have only a handful of (phony) five-star reviews. Lastly, look for typos and poor grammar in both the app description and screenshots. They could be a sign that a hacker slapped the app together and quickly deployed it. 

2) Go with a strong recommendation.

Yet better than combing through user reviews yourself is getting a recommendation from a trusted source, like a well-known publication or from app store editors themselves. In this case, much of the vetting work has been done for you by an established reviewer. A quick online search like “best fitness apps” or “best apps for travelers” should turn up articles from legitimate sites that can suggest good options and describe them in detail before you download. 

3) Keep an eye on app permissions.

Another way hackers weasel their way into your device is by getting permissions to access things like your location, contacts, and photos—and they'll use sketchy apps to do it. So, check and see what permissions the app is requesting. If it's asking for way more than you bargained for, like a simple game wanting access to your camera or microphone, it may be a scam. Delete the app and find a legitimate one that doesn't ask for invasive permissions like that. If you're curious about permissions for apps that are already on your phone, iPhone users can learn how to allow or revoke app permissions here, and Android users can do the same here. 

4) Protect your smartphone with security software.

With all that we do on our phones, it’s important to get security software installed on them, just like we install it on our computers and laptops. Whether you go with comprehensive online protection software that secures all your devices or pick up an app in Google Play or Apple’s App Store, you’ll have malware, web, and device security that’ll help you stay safe on your phone.  

5) Update your phone’s operating system.

Together with installing security software, keeping your phone’s operating system up to date can help to keep you protected from most malware. Updates can fix vulnerabilities that hackers rely on to pull off their malware-based attacks—it’s another tried and true method of keeping yourself safe and your phone running great too. 

Protecting yourself while using apps 

Who can you trust? As for scammers who use legitimate communications apps to lure in their victims, McAfee’s Mobile Research team recommends the following: 

  • Be suspicious of unsolicited emails, texts, or direct messages and think twice before you click on any links. 
  • Ensure that your mobile device is protected with security solutions that include features to monitor and block potentially malicious links, such as the web protection found in our own online protection software. 
  • Remember that most of these scams work because the scammer creates a false sense of urgency or preys on a heightened emotional state. Pause before you rush to interact with any message that is threatening or urgent, especially if it is from an unknown or unlikely sender. 
  • If it’s too good to be true, it probably is. Whether it’s a phony job offer, a low price on an item that’s usually expensive, a stranger promising romance, or winnings from a lottery you never entered, scammers will weave all kinds of stories to steal your money and your personal information. 

Get the full story with our Consumer Mobile Threat Report 

The complete report uncovers yet more mobile trends, such as the top mobile malware groups McAfee identified in 2022, predictions for the year ahead, ways you can keep your children safer on their phones, and ways you can keep yourself safer when you use your phone for yourself and for work.  

The full report is free, and you can download it here. 

The post McAfee 2023 Consumer Mobile Threat Report appeared first on McAfee Blog.

A Parent’s Guide to ChatGPT

ChatGPT is, without doubt, the biggest tech story of the year. It's created debate in schools and universities, made history by becoming the fastest-growing app ever and even caused Google to issue a Code Red! But if you haven't heard anything about it or still can't get your head around it then I've got you! Keep reading because I've put together a 'cheat sheet' to help get you up to speed plus some pointers on how to manage this intriguing technology and your kids. 

So, what is ChatGPT? 

ChatGPT is an online software program that uses a new form of artificial intelligence – generative artificial intelligence – to provide human-style responses to a broad array of requests. And let me assure you, its responses are much less robotic and far more intelligent sounding than earlier iterations of artificial intelligence. Whether you need a recipe formulated, poetry written, tips for your next party or text translated, ChatGPT can assist. Think of it as Google but on steroids. But instead of overwhelming you with thousands of search results, it summarises them in a conversational form.  

It was developed by San Francisco startup OpenAI, which was co-founded by Elon Musk and Sam Altman in 2015. Like all new startups, it has a host of investors in tow, but Microsoft is without a doubt the biggest. 

When I asked ChatGPT to describe itself, it replied: 

ChatGPT is a conversational AI model developed by OpenAI. It’s based on the GPT-3 (Generative Pre-trained Transformer 3) architecture, which is one of the largest and most advanced language models in existence. The model has been trained on a massive corpus of text data from the internet, allowing it to generate human-like responses to a wide range of topics and questions. It can be used to power various applications such as chatbots, language translation, content generation, and more. 

Let me simplify – ChatGPT uses generative artificial intelligence to provide ‘human style’ content, language translation, summarisation ability and search engine results within seconds. It can solve maths questions, write jokes, develop a resume and cover letter, write code and even help you prepare for a job interview. 

How Does It Work? 

ChatGPT is powered by a large language model, or LLM, meaning it’s programmed to understand human language and create responses based on large quantities of data. It has the ability to remember or ‘log’ context from a user’s previous message and use it to create responses later in the conversation, giving it a human-like feel. 

How Popular is it? 

Just five days after its release, ChatGPT had signed up 1 million users, according to a tweet by OpenAI co-founder Sam Altman. In just two months, it had amassed a whopping 100 million monthly active users, making it the fastest-growing application in history. And just to give you some context, it took TikTok nine months to reach 100 million users and Instagram two and a half years. 

Without doubt, the main reasons for its popularity are its ease of access and its seemingly endless scope of ability. It's super easy to use – once you've set up an account, it's as simple as typing your request or question into the text box. And there is no minimum age required for users – unlike many other social media platforms. And because it can assist with almost any issue, from writing a legal brief to answering questions to providing companionship, in almost 100 languages, a lot of us could easily find a way to use it in our day-to-day lives. 

Some experts believe that the timing of ChatGPT is another reason for its success. Just as the Renaissance followed the Black Death in the 14th century, ChatGPT may have arrived at a point in history when creativity is surging after 2-3 very long and hard years of living with Covid. 

How Much Does It Cost? 

ChatGPT is still a free service; however, a premium version called ChatGPT Plus has recently been introduced. For US$20 per month, users get access to the chatbot even when demand is high, with faster response speeds and priority access to new features. While I have never had an issue gaining access to ChatGPT, even in peak times, friends of mine in the US have had to invest in the paid membership – otherwise they have to wait till late in the evening to have their questions answered! 

Does It Have Any Competitors? 

Microsoft recently announced that it will be incorporating some of the ChatGPT functionality into its Bing and Edge search engines but that it will use a next generation OpenAI model that is more powerful than ChatGPT. If you’re a Microsoft customer, keep a watch on your inbox for an invite! 

Google has just unveiled its offering. Called Bard, it’s similar to ChatGPT but the biggest difference is that it will use current information from the web whereas ChatGPT’s data sources are only current as of September 2021 – I did confirm that with my ChatGPT source!! Bard is projected to be ready for use by the end of February 2023. Interestingly, Google was in fact the first to embrace conversational AI through the launch of Lamda (Language Model for Dialogue Applications) in 2021 but it didn’t launch a consumer version which left a wide opening for ChatGPT to be the first offering in the consumer race. 

As a Parent, What Should I Be Concerned About? 

There’s no doubt that ChatGPT will help fuel a curious mind and be a captivating way to spend time online for inquisitive kids however there are a few things us parents need to be aware of to ensure our kids stay as safe as possible. 

1. When ChatGPT Can Do Your Homework 

Without a doubt, using ChatGPT to write your essay, solve a maths problem or translate your French homework has been the biggest concern for schools, universities, and parents. Some schools have already banned the use of ChatGPT while others are rewriting curriculums to avoid tasks that could be completed by ChatGPT.  

However, it appears that these concerns may be managed with the release of new software that can detect work produced by ChatGPT. Stanford University has just released DetectGPT, which will help teachers detect work that was created using the ChatGPT chatbot or other similar large language models (LLMs). OpenAI has also released its own ChatGPT detection tool, however it does refer to it as 'imperfect'.   

What To Do – Some experts believe we need to work with ChatGPT and that it in fact could be a powerful teaching tool if it’s embraced and used wisely. Regardless of your thoughts on this, I suggest you work closely with your child’s school to understand what their policy is on its use and encourage your kids to follow it accordingly. 

2. Inappropriate Content 

Even though ChatGPT states that its intention is to 'generate appropriate and informative responses', there's no guarantee that this will always happen. I have spent considerable time trying to catch it out and am pleased to report that I couldn't. It appears that there are certain topics it steers away from, and it does seem to have a good set of boundaries about which questions not to answer and which topics not to comment on – however, don't rely on these! 

What To Do – If you have concerns, ensure your child has supervision when using ChatGPT. 

3. Chat GPT Doesn’t Always Get It Right 

While ChatGPT’s IQ and scope seems limitless, it isn’t perfect. Not only have there been reports of it being factually incorrect when creating content, its data sources are only current as at September 2021. 

What To Do – Double check the content it creates for accuracy but steer your child towards a reliable and safe source for research projects. 

And my final piece of advice – if you haven’t yet used ChatGPT, make yourself a cuppa and give it a whirl. Like everything in the online world, you need to understand how it works if you want to be able to help your kids stay safe. And if you aren’t sure what to ask it – why not a recipe for dinner? Simply enter what you can find in your fridge in the text box and within seconds, you’ll have a recipe! 

Bon Appetit! 

Alex   

The post A Parent’s Guide to ChatGPT appeared first on McAfee Blog.

Could ChatGPT Cause Heartbreak with Online Dating Scams?

Scammers now have new tools to lure people who are looking for love online, by reeling in potential victims with artificial intelligence (AI). Thanks to the aid of popular AI tools like ChatGPT, scammers can potentially generate anything from seemingly innocent intro chats to full-blown love letters in seconds, all ready to dupe their victims on demand. 

Tactics like these are typical of “catfishing” in dating and romance scams, where the scammer creates a phony online persona and uses it to lure their victim into a relationship for financial gain. Think of it as a bait-and-hook approach, where the promise of love is the bait, and theft is the hook. 

And as explained above, baiting that hook just got far easier with AI.  

Sound farfetched? After all, who would fall for such a thing? It turns out that a sophisticated AI chatbot can sound an awful lot like a real person seeking romance. In our latest “Modern Love” research report, we presented a little love letter to more than 5,000 people worldwide and asked them if it was written by a person or by AI: 

My dearest, 

The moment I laid eyes on you, I knew that my heart would forever be yours. Your beauty, both inside and out, is unmatched and your kind and loving spirit only adds to my admiration for you. 

You are my heart, my soul, my everything. I cannot imagine a life without you, and I will do everything in my power to make you happy. I love you now and forever. 

Forever yours … 

One-third of the people (33%) thought that a person wrote this letter, 31% said an AI wrote it, and 36% said they couldn’t tell one way or another.  

What did you think? If you said that a person wrote the letter, you got hoodwinked. An AI wrote it. 

Two out of three people will talk to strangers online 

The implications are concerning. Put plainly, scammers can turn on the charm practically at will with AI, generating high volumes of romance-laden content for potentially high volumes of victims. And as our research indicates, plenty of people are ready to soak it up. 

 

Worldwide, we found: 

  • Two out of three people (66%) said that they had been contacted by a stranger through social media or SMS and then started to chat with them regularly. 
  • Facebook and Facebook Messenger (39%) and Instagram and Instagram direct messages (33%) are the most mentioned social media platforms used by strangers to start chatting. 

Chatting with a stranger is one thing. Yet how often did it lead to a request for money or other personal information? More than half the time. 

  • In chats with strangers, 55% of people said that the stranger asked them to transfer money. 
  • In about 34% of those cases, this involved less than $500, but in 20% of those cases the amount asked for was more than $10,000. 
  • Further, 57% of people surveyed worldwide said that they were asked to share personal information through a dating app or social media. 
  • This most often included their phone number (30%), an intimate photo or video (20%), or their email address (18%). 
  • It also included requests for their government or tax ID number (9%) or account passwords for social media, email, or banking (8%). 

How do you know you or someone else is caught up in an online dating or romance scam? 

Scammers love a good story, one that's intriguing enough to be believable, such as holding a somewhat exotic job outside of the country. Common tales include drilling on an offshore oil rig, working as a doctor for an international relief organization, or some other sort of job that conveniently prevents them from meeting up in person. 

Luckily, this is where many people start to catch on. In our research, people said they found out they were being catfished when: 

  • The person was never able to meet in person or do a video call – 39% 
  • They searched for the scammer’s photo online and found out that it was fake – 32% 
  • The person asked for personally identifiable information – 29% 
  • The person didn’t want to talk on the phone – 27% 
  • There were too many typos or sentences didn’t make sense – 26% 

Of course, the true telltale sign of an online dating or romance scam is when the scammer asks for money. The scammer includes a little story with that request too, usually revolving around some sort of hardship. They may say they need to pay for travel or medical expenses, a visa or other travel documents, or even customs fees to retrieve an item that they say is stuck in the mail. There’s always some kind of twist or intriguing complication that seems just reasonable enough such that the victim falls for it. 

Scammers will often favor payment via wire transfers, gift cards, and reloadable debit cards, because they’re like cash in many regards—once you fork over that money, it’s as good as gone. These forms of payment offer few protections in the event of scam, theft, or loss, unlike a credit card charge that you can contest or cancel with the credit card company. Unsurprisingly, scammers have also added cryptocurrency to that list because it’s notoriously difficult to trace and recover.  

In all, a romance scammer will typically look for the easiest payment method that’s the most difficult to contest, reimburse, or trace back to the recipient. Requests for money, particularly in these forms, should raise a major red flag. 

How do you avoid getting tangled up in an online dating or romance scam? 

What makes online dating and romance scams so malicious, and so difficult to sniff out, is that scammers prey on people’s emotions. This is love we’re talking about, after all. People may not always think or act clearly to the extent that they may wave away their doubts—or even defend the scammer when friends or family confront them on the relationship.  

However, an honest look at yourself and the relationship you’re in provides some of the best guidance around when it comes to meeting new people online: 

  • Talk to someone you trust about this new love interest. It can be easy to miss things that don’t add up. So, pay attention to friends and family if they are concerned. 
  • Take the relationship slowly. Ask questions and look for inconsistent answers. 
  • Try a reverse-image search of any profile pictures the person uses. If they’re associated with another name or with details that don’t match up, it’s a scam. 
  • And never send money or gifts to someone you haven’t met in person—even if they send you money first. 

Scammers, although arguably heartless, are still human. They make mistakes. The stories they concoct are just that. Stories. They may jumble their details, get their times and dates all wrong, or simply get caught in an apparent lie. Also, keep in mind that some scammers may be working on several victims at once, which is yet another opportunity for them to get confused and slip up. 

In the cases where scammers may use AI tools to pad their conversations, you can look for several other signs. AI still isn’t always the smoothest operator when it comes to language. AI often uses short sentences and reuses the same words, and sometimes it generates a lot of content without saying much at all. What you’re reading may seem to lack a certain … substance.  

Prevent online dating and romance scams from happening to you 

Scammers are likely to use all kinds of openers. That text you got from an unknown number that says, “Hi, where are you? We’re still meeting for lunch, right?” or that out-of-the-blue friend request on social media are a couple examples. Yet before that, the scammer had to track down your number or profile some way or somehow. Chances are, all they needed to do was a little digging around online. 

 

Say “no” to strangers bearing friend requests

Be critical of the invitations you receive. An out-and-out stranger could be more than a romance scammer: they could be a fake account designed to gather information on users for purposes of cybercrime, or an account designed to spread false information. There are plenty of them too. In fact, in Q3 of 2022 alone, Facebook took action on 1.5 billion fake accounts. Reject requests from strangers. 

Want fewer scam texts and messages? Clean up your personal data

How did that scammer get your phone number or contact information in the first place? It could have come from a data broker site. Data brokers are part of a global data economy estimated at $200 billion a year, fueled by thousands of data points on billions of people scraped from public records, social media, third-party sources, and sometimes other data broker sites as well. With info from data broker sites, scammers compile huge lists of potential victims for their spammy texts and calls. 

Our Personal Data Cleanup can help remove your info from those sites for you. Personal Data Cleanup scans some of the riskiest data broker sites and shows you which ones are selling your personal info. It also provides guidance on how you can remove your data from those sites and can even manage the removal for you depending on your plan. It also monitors those sites, so if your info gets posted again, you can request its removal again. 

Protect yourself and your devices

Online protection software can protect you from clicking on malicious links that a scammer may send you online, while also steering you clear of other threats like viruses, ransomware, and phishing attacks in general. It can look out for your personal information as well, protecting your privacy by monitoring the dark web for your email, SSN, bank accounts, credit cards, and other info that a scammer or identity thief may put to use. With identity theft a rather commonplace occurrence today, security software is really a must. 

Who else will pen a love letter with AI this Valentine’s Day? 

Worldwide, we found that 30% of men (and 26% of all adults) said they plan to use artificial intelligence tools to put their feelings into words. Yet, there’s a flipside. We also found that 49% of respondents said they’d be offended if they found out the note they received had been produced by a machine.  

So why are people turning to AI? The most popular reason given for using AI as a ghostwriter was that it would make the sender feel more confident (27%), while others cited lack of time (21%) or lack of inspiration (also 21%). A further 10% said it would just be quicker and easier, and that they didn't think they'd get found out. 

It’s also worth noting that true romance seekers have called upon AI to kick off chats in dating apps, which might take the form of an ice-breaking joke or wistful comment. Likewise, AI-enabled apps have started cropping up in app stores, which can coach you through a conversation based on contextual cues like asking someone out or rescheduling a date. Some can even create AI-generated art on demand to share a feeling through an image.  

It may be better than opening a conversation with an otherwise dull “hey,” yet as our research shows, there are risks involved if people lean on it too heavily—and prove to be quite a different person when they start talking on their own. 

AI is only as good or bad as the way people use it 

It’s important to remember that an AI chatbot like ChatGPT is a tool. It’s not inherently good or bad. It’s all in the hands of the user and how they choose to apply it. And in the case of scammers, AI chatbots have the potential to do a lot of harm. 

However, you can protect yourself. In fact, you can still spot online dating and romance scams in much the same way as before. They still follow certain rules and share the same signs. If anything, the one thing that has changed is this: reading messages today calls for extra scrutiny. It will take a sharp eye to tell what’s real and what’s fake.  

As our research showed, online dating and romance scams begin and end with you. Thinking back to what we learned as children about “stranger danger” goes a long way here. Be suspicious and, better yet, don’t engage. Go about your way. And if you do find yourself chatting with someone who requests money or personal information, end it. Painful as the decision may be, it’s the right decision. No true friend or partner, one you’ve never seen or met, would rightfully ask that of you. 

Editor’s Note: 

Online dating and romance scams are a crime. If you think that you or someone you know has fallen victim to one, report it to your authorities and appropriate government agencies. In the case of identity theft or loss of personal information, our knowledge base article on identity theft offers suggestions for the specific steps you can take in specific countries, along with helpful links for local authorities that you can turn to for reporting and assistance. 

The post Could ChatGPT Cause Heartbreak with Online Dating Scams? appeared first on McAfee Blog.

ChatGPT: A Scammer’s Newest Tool

By: McAfee

ChatGPT: Everyone’s favorite chatbot/writer’s-block buster/ridiculous short story creator is skyrocketing in fame.1 In fact, the AI-generated content “masterpieces” (by AI standards) are impressing technologists the world over. While the tech still has a few kinks that need ironing out, ChatGPT is almost capable of rivaling professional human writers.  

However, as with most good things, bad actors are using technology for their own gains. Cybercriminals are exploring the various uses of the AI chatbot to trick people into giving up their privacy and money. Here are a few of the latest unsavory uses of AI text generators and how you can protect yourself—and your devices—from harm. 

Malicious Applications of ChatGPT 

Besides students and time-strapped employees using ChatGPT to finish writing assignments for them, scammers and cybercriminals are using the program for their own dishonest assignments. Here are a few of the nefarious AI text generator uses: 

  1. Malware. Malware often has a very short lifecycle: a cybercriminal creates it, infects a few devices, and then operating system and security vendors push updates that protect devices against that particular strain. Additionally, tech sites alert their readers to emerging malware threats. Once the general public and cybersecurity experts are aware of a threat, its potency is quickly nullified. ChatGPT, however, is proficient in writing malicious code. Specifically, the AI could be used to write polymorphic malware, a type of program that constantly changes its own code, making it difficult to detect and defend against.2 Plus, criminals can use ChatGPT to write mountains of malicious code. While a human would have to take a break to eat, sleep, and walk around the block, AI doesn’t require breaks. Someone could turn their malware operation into a 24-hour digital crime machine. 
  2. Fake dating profiles. Catfish, or people who create fake online personas to lure others into relationships, are beginning to use AI to supplement their romance scams. Like malware creators who are using AI to scale up their production, romance scammers can now use AI to lighten their workload and attempt to keep up many dating profiles at once. For scammers who need inspiration, ChatGPT is capable of altering the tone of its messages. For example, a scammer can tell ChatGPT to write a love letter or to dial up the charm. This could result in earnest-sounding professions of love that could convince someone to relinquish their personally identifiable information (PII) or send money. 
  3. Phishing. Phishers are using AI to up their phishing game. Phishers, often known for their poor grammar and spelling, are improving the quality of their messages with AI, which rarely makes editorial mistakes. ChatGPT also understands tone commands, so phishers can heighten the urgency of messages that demand immediate payment or responses with passwords or PII. 

How to Avoid AI Text Generator Scams 

The best way to avoid being fooled by AI-generated text is to stay on high alert and scrutinize any texts, emails, or direct messages you receive from strangers. There are a few tell-tale signs of an AI-written message. For example, AI often uses short sentences and reuses the same words. Additionally, AI may produce content that says a lot without saying much at all. Because AI can’t form opinions of its own, its messages can sound hollow. In the case of romance scams, if the person you’re communicating with refuses to meet in person or chat over video, consider cutting ties.  
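As a rough illustration of the signals described above, the toy sketch below measures two of them: average sentence length and how often words are reused. This is not a real AI-text detector (no reliable one exists for short messages); it only shows what “short sentences and repeated words” might look like as numbers.

```python
import re

def ai_text_signals(text):
    """Toy heuristic for two signals mentioned above: short sentences
    and heavy word reuse. Illustrative only; not a real detector."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    # Average words per sentence: very low values suggest choppy, short sentences.
    avg_sentence_len = len(words) / max(len(sentences), 1)
    # Share of distinct words: low values suggest heavy repetition.
    unique_word_ratio = len(set(words)) / max(len(words), 1)
    return {
        "avg_sentence_len": round(avg_sentence_len, 1),
        "unique_word_ratio": round(unique_word_ratio, 2),
    }

sample = ("You are so special. You are so kind. You are the one I "
          "have waited for. You are my world.")
print(ai_text_signals(sample))
```

A repetitive, choppy message like the sample scores a low unique-word ratio and a short average sentence; a genuine human note usually varies both. Treat any such score as one hint among many, never proof.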

To improve your peace of mind, McAfee+ Ultimate allows you to live your best and most confident life online. In case you ever do fall victim to an identity theft scam or your device downloads malware, McAfee will help you resolve and recover from the incident. In addition, McAfee’s proactive protection services – such as three-bureau credit monitoring, unlimited antivirus, and web protection – can help you avoid the headache altogether!  

1Poc Network, “I asked AI (ChatGPT) to write me a rather off short story and the result was amazing” 

2CyberArk, “Chatting Our Way Into Creating a Polymorphic Malware” 

The post ChatGPT: A Scammer’s Newest Tool appeared first on McAfee Blog.
