chaos is an 'origin' IP scanner developed by RST in collaboration with ChatGPT. It is a niche utility with an intended audience of mostly penetration testers and bug hunters.
An origin IP is a term of art for the final public IP address that actually hosts a website which is publicly served through a third party, such as a CDN or WAF. If you'd like to understand more about why anyone might be interested in origin IPs, please check out our blog post.
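At its core, the technique chaos automates can be done by hand: request a candidate IP address directly, present the target hostname in the Host header, and see whether the server answers for that name. Below is a minimal sketch of that check (our illustration, not chaos's code; the IP and hostname are placeholder values, and the requests library is assumed):

# Ask a candidate IP to serve content for a given hostname.
import requests

candidate_ip = "203.0.113.7"   # hypothetical candidate origin IP
fqdn = "www.example.com"       # hostname normally fronted by a third party

resp = requests.get(
    f"http://{candidate_ip}/",
    headers={"Host": fqdn},    # request the site by name from the bare IP
    timeout=2,
)
print(candidate_ip, resp.status_code, resp.reason)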
chaos was rapidly prototyped from idea to functional proof-of-concept in less than 24 hours using our principles of DevOps with ChatGPT.
usage: chaos.py [-h] -f FQDN -i IP [-a AGENT] [-C] [-D] [-j JITTER] [-o OUTPUT] [-p PORTS] [-P] [-r] [-s SLEEP] [-t TIMEOUT] [-T] [-v] [-x]
[ASCII-art banner: CHAOS]
CHAtgpt Origin-ip Scanner
Origin IP Scanner developed with ChatGPT
cha*os (n): complete disorder and confusion
(ver: 0.9.4)
cd path/to/chaos
pip3 install -U pip setuptools virtualenv
virtualenv env
source env/bin/activate
(env) pip3 install -U -r ./requirements.txt
(env) ./chaos.py -h
-h, --help show this help message and exit
-f FQDN, --fqdn FQDN Path to FQDN file (one FQDN per line)
-i IP, --ip IP IP address(es) for HTTP requests (Comma-separated IPs, IP networks, and/or files with IP/network per line)
-a AGENT, --agent AGENT
User-Agent header value for requests
-C, --csv Append CSV output to OUTPUT_FILE.csv
-D, --dns Perform fwd/rev DNS lookups on FQDN/IP values prior to request; no impact to testing queue
-j JITTER, --jitter JITTER
Add a 0-N second randomized delay to the sleep value
-o OUTPUT, --output OUTPUT
Append console output to FILE
-p PORTS, --ports PORTS
Comma-separated list of TCP ports to use (default: "80,443")
-P, --no-prep Do not pre-scan each IP/port with `GET /` using `Host: {IP:Port}` header to eliminate unresponsive hosts (see the sketch after this list)
-r, --randomize Randomize(ish) the order IPs/ports are tested
-s SLEEP, --sleep SLEEP
Add N seconds before thread completes
-t TIMEOUT, --timeout TIMEOUT
Wait N seconds for an unresponsive host
-T, --test Test-mode; don't send requests
-v, --verbose Enable verbose output
-x, --singlethread Single threaded execution; for 1-2 core systems; default threads=(cores-1) if cores>2
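A rough sketch of the kind of liveness prep-check that -P disables (our illustration under stated assumptions, not chaos's actual implementation; requires the requests library):

# Pre-test an IP/port with `GET /` and `Host: {IP:Port}` so that
# unresponsive hosts can be dropped before the real FQDN tests run.
import requests

def prep_alive(ip, port, timeout=1.0):
    for scheme in ("http", "https"):
        try:
            requests.get(
                f"{scheme}://{ip}:{port}/",
                headers={"Host": f"{ip}:{port}"},
                timeout=timeout,
                verify=False,  # self-signed certs are common on origins
            )
            return True   # any HTTP answer at all counts as responsive
        except requests.RequestException:
            continue
    return False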
Launch python HTTP server
% python3 -u -m http.server 8001
Serving HTTP on :: port 8001 (http://[::]:8001/) ...
Launch ncat as HTTP on a port detected as SSL; use a loop because --keep-open can hang
% while true; do ncat -lvp 8443 -c 'printf "HTTP/1.0 204 Plaintext OK\n\n<html></html>\n"'; done
Ncat: Version 7.94 ( https://nmap.org/ncat )
Ncat: Listening on [::]:8443
Ncat: Listening on 0.0.0.0:8443
Also launch ncat as SSL on a port that will default to HTTP detection
% while true; do ncat --ssl -lvp 8444 -c 'printf "HTTP/1.0 202 OK\n\n<html></html>\n"'; done
Ncat: Version 7.94 ( https://nmap.org/ncat )
Ncat: Generating a temporary 2048-bit RSA key. Use --ssl-key and --ssl-cert to use a permanent one.
Ncat: SHA-1 fingerprint: 0208 1991 FA0D 65F0 608A 9DAB A793 78CB A6EC 27B8
Ncat: Listening on [::]:8444
Ncat: Listening on 0.0.0.0:8444
Prepare an FQDN file:
% cat ../test_localhost_fqdn.txt
www.example.com
localhost.example.com
localhost.local
localhost
notreally.arealdomain
Prepare an IP file / list:
% cat ../test_localhost_ips.txt
127.0.0.1
127.0.0.0/29
not_an_ip_addr
-6.a
=4.2
::1
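chaos warns about and skips entries it cannot parse, as the scan output below shows. Here is a minimal sketch (not chaos's actual parser) of how such values can be validated and expanded using Python's standard ipaddress module:

# Validate a single IP or CIDR entry, expanding networks to addresses.
import ipaddress

def expand(value):
    try:
        net = ipaddress.ip_network(value.strip(), strict=False)
        return list(net.hosts()) or [net.network_address]
    except ValueError:
        print(f"[WARN] Error: invalid IP address or CIDR block {value}")
        return []

for entry in ["127.0.0.1", "127.0.0.0/29", "not_an_ip_addr", "::1"]:
    print(entry, "->", expand(entry))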
Run the scan
% ./chaos.py -f ../test_localhost_fqdn.txt -i ../test_localhost_ips.txt,::1/126 -p 8001,8443,8444 -x -s0.2 -t1
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost.local
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: localhost
2023-06-21 12:48:33 [WARN] Ignoring invalid FQDN value: notreally.arealdomain
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block =4.2
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block -6.a
2023-06-21 12:48:33 [WARN] Error: invalid IP address or CIDR block not_an_ip_addr
2023-06-21 12:48:33 [INFO] * ---- <META> ---- *
2023-06-21 12:48:33 [INFO] * Version: 0.9.4
2023-06-21 12:48:33 [INFO] * FQDN file: ../test_localhost_fqdn.txt
2023-06-21 12:48:33 [INFO] * FQDNs loaded: ['www.example.com', 'localhost.example.com']
2023-06-21 12:48:33 [INFO] * IP input value(s): ../test_localhost_ips.txt,::1/126
2023-06-21 12:48:33 [INFO] * Addresses parsed from IP inputs: 12
2023-06-21 12:48:33 [INFO] * Port(s): 8001,8443,8444
2023-06-21 12:48:33 [INFO] * Thread(s): 1
2023-06-21 12:48:33 [INFO] * Sleep value: 0.2
2023-06-21 12:48:33 [INFO] * Timeout: 1.0
2023-06-21 12:48:33 [INFO] * User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36 ch4*0s/0.9.4
2023-06-21 12:48:33 [INFO] * ---- </META> ---- *
2023-06-21 12:48:33 [INFO] 36 unique address/port addresses for testing
Prep Tests: 100%|████████████████████████████████████████████████████████████████| 36/36 [00:29<00:00, 1.20it/s]
2023-06-21 12:49:03 [INFO] 9 IP/ports verified, reducing test dataset from 72 entries
2023-06-21 12:49:03 [INFO] 18 pending tests remain after pre-testing
2023-06-21 12:49:03 [INFO] Queuing 18 threads
++RCVD++ (200 OK) www.example.com @ :::8001
++RCVD++ (204 Plaintext OK) www.example.com @ :::8443
++RCVD++ (202 OK) www.example.com @ :::8444
++RCVD++ (200 OK) www.example.com @ ::1:8001
++RCVD++ (204 Plaintext OK) www.example.com @ ::1:8443
++RCVD++ (202 OK) www.example.com @ ::1:8444
++RCVD++ (200 OK) www.example.com @ 127.0.0.1:8001
++RCVD++ (204 Plaintext OK) www.example.com @ 127.0.0.1:8443
++RCVD++ (202 OK) www.example.com @ 127.0.0.1:8444
++RCVD++ (200 OK) localhost.example.com @ :::8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ :::8443
++RCVD++ (202 OK) localhost.example.com @ :::8444
++RCVD++ (200 OK) localhost.example.com @ ::1:8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ ::1:8443
++RCVD++ (202 OK) localhost.example.com @ ::1:8444
++RCVD++ (200 OK) localhost.example.com @ 127.0.0.1:8001
++RCVD++ (204 Plaintext OK) localhost.example.com @ 127.0.0.1:8443
++RCVD++ (202 OK) localhost.example.com @ 127.0.0.1:8444
Origin Scan: 100%|██████████████████████████████████████████████████████████████| 18/18 [00:06<00:00, 2.76it/s]
2023-06-21 12:49:09 [RSLT] Results from 5 FQDNs:
::1
::1:8444 => (202 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8001 => (200 / OK)
127.0.0.1
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)
::
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)
www.example.com
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)
::1:8001 => (200 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8444 => (202 / OK)
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)
localhost.example.com
:::8001 => (200 / OK)
:::8443 => (204 / Plaintext OK)
:::8444 => (202 / OK)
::1:8001 => (200 / OK)
::1:8443 => (204 / Plaintext OK)
::1:8444 => (202 / OK)
127.0.0.1:8001 => (200 / OK)
127.0.0.1:8443 => (204 / Plaintext OK)
127.0.0.1:8444 => (202 / OK)
rst@r57 chaos %
-T
runs in test mode (does everything except send requests)
-v
the verbose option provides additional output
WormGPT, a private new chatbot service advertised as a way to use Artificial Intelligence (AI) to write malicious software without all the pesky prohibitions on such activity enforced by the likes of ChatGPT and Google Bard, has started adding restrictions of its own on how the service can be used. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the 23-year-old Portuguese programmer who created the project now says his service is slowly morphing into “a more controlled environment.”
The large language models (LLMs) made by ChatGPT parent OpenAI or Google or Microsoft all have various safety measures designed to prevent people from abusing them for nefarious purposes — such as creating malware or hate speech. In contrast, WormGPT has promoted itself as a new, uncensored LLM that was created specifically for cybercrime activities.
WormGPT was initially sold exclusively on HackForums, a sprawling, English-language community that has long featured a bustling marketplace for cybercrime tools and services. WormGPT licenses are sold for prices ranging from 500 to 5,000 Euro.
“Introducing my newest creation, ‘WormGPT,’” wrote “Last,” the handle chosen by the HackForums user who is selling the service. “This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”
In July, an AI-based security firm called SlashNext analyzed WormGPT and asked it to create a “business email compromise” (BEC) phishing lure that could be used to trick employees into paying a fake invoice.
“The results were unsettling,” SlashNext’s Daniel Kelley wrote. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.”
A review of Last’s posts on HackForums over the years shows this individual has extensive experience creating and using malicious software. In August 2022, Last posted a sales thread for “Arctic Stealer,” a data stealing trojan and keystroke logger that he sold there for many months.
“I’m very experienced with malwares,” Last wrote in a message to another HackForums user last year.
Last has also sold a modified version of the information stealer DCRat, as well as an obfuscation service marketed to malicious coders who sell their creations and wish to insulate them from being modified or copied by customers.
Shortly after joining the forum in early 2021, Last told several different HackForums users his name was Rafael and that he was from Portugal. HackForums has a feature that allows anyone willing to take the time to dig through a user’s postings to learn when and if that user was previously tied to another account.
That account tracing feature reveals that while Last has used many pseudonyms over the years, he originally used the nickname “ruiunashackers.” The first search result in Google for that unique nickname brings up a TikTok account with the same moniker, and that TikTok account says it is associated with an Instagram account for a Rafael Morais from Porto, a coastal city in northwest Portugal.
Reached via Instagram and Telegram, Morais said he was happy to chat about WormGPT.
“You can ask me anything,” Morais said. “I’m an open book.”
Morais said he recently graduated from a polytechnic institute in Portugal, where he earned a degree in information technology. He said only about 30 to 35 percent of the work on WormGPT was his, and that other coders are contributing to the project. So far, he says, roughly 200 customers have paid to use the service.
“I don’t do this for money,” Morais explained. “It was basically a project I thought [was] interesting at the beginning and now I’m maintaining it just to help [the] community. We have updated a lot since the release, our model is now 5 or 6 times better in terms of learning and answer accuracy.”
WormGPT isn’t the only rogue ChatGPT clone advertised as friendly to malware writers and cybercriminals. According to SlashNext, one unsettling trend on the cybercrime forums is evident in discussion threads offering “jailbreaks” for interfaces like ChatGPT.
“These ‘jailbreaks’ are specialised prompts that are becoming increasingly common,” Kelley wrote. “They refer to carefully crafted inputs designed to manipulate interfaces like ChatGPT into generating output that might involve disclosing sensitive information, producing inappropriate content, or even executing harmful code. The proliferation of such practices underscores the rising challenges in maintaining AI security in the face of determined cybercriminals.”
Morais said they have been using the GPT-J 6B model since the service was launched, although he declined to discuss the source of the LLMs that power WormGPT. But he said the data set that informs WormGPT is enormous.
“Anyone that tests wormgpt can see that it has no difference from any other uncensored AI or even chatgpt with jailbreaks,” Morais explained. “The game changer is that our dataset [library] is big.”
Morais said he began working on computers at age 13, and soon started exploring security vulnerabilities and the possibility of making a living by finding and reporting them to software vendors.
“My story began in 2013 with some greyhat activies, never anything blackhat tho, mostly bugbounty,” he said. “In 2015, my love for coding started, learning c# and more .net programming languages. In 2017 I’ve started using many hacking forums because I have had some problems home (in terms of money) so I had to help my parents with money… started selling a few products (not blackhat yet) and in 2019 I started turning blackhat. Until a few months ago I was still selling blackhat products but now with wormgpt I see a bright future and have decided to start my transition into whitehat again.”
WormGPT sells licenses via a dedicated channel on Telegram, and the channel recently lamented that media coverage of WormGPT so far has painted the service in an unfairly negative light.
“We are uncensored, not blackhat!” the WormGPT channel announced at the end of July. “From the beginning, the media has portrayed us as a malicious LLM (Language Model), when all we did was use the name ‘blackhatgpt’ for our Telegram channel as a meme. We encourage researchers to test our tool and provide feedback to determine if it is as bad as the media is portraying it to the world.”
It turns out, when you advertise an online service for doing bad things, people tend to show up with the intention of doing bad things with it. WormGPT’s front man Last seems to have acknowledged this at the service’s initial launch, which included the disclaimer, “We are not responsible if you use this tool for doing bad stuff.”
But lately, Morais said, WormGPT has been forced to add certain guardrails of its own.
“We have prohibited some subjects on WormGPT itself,” Morais said. “Anything related to murders, drug traffic, kidnapping, child porn, ransomwares, financial crime. We are working on blocking BEC too, at the moment it is still possible but most of the times it will be incomplete because we already added some limitations. Our plan is to have WormGPT marked as an uncensored AI, not blackhat. In the last weeks we have been blocking some subjects from being discussed on WormGPT.”
Still, Last has continued to state on HackForums — and more recently on the far more serious cybercrime forum Exploit — that WormGPT will quite happily create malware capable of infecting a computer and going “fully undetectable” (FUD) by virtually all of the major antivirus makers (AVs).
“You can easily buy WormGPT and ask it for a Rust malware script and it will 99% sure be FUD against most AVs,” Last told a forum denizen in late July.
Asked to list some of the legitimate or what he called “white hat” uses for WormGPT, Morais said his service offers reliable code, unlimited characters, and accurate, quick answers.
“We used WormGPT to fix some issues on our website related to possible sql problems and exploits,” he explained. “You can use WormGPT to create firewalls, manage iptables, analyze network, code blockers, math, anything.”
Morais said he wants WormGPT to become a positive influence on the security community, not a destructive one, and that he’s actively trying to steer the project in that direction. The original HackForums thread pimping WormGPT as a malware writer’s best friend has since been deleted, and the service is now advertised as “WormGPT – Best GPT Alternative Without Limits — Privacy Focused.”
“We have a few researchers using our wormgpt for whitehat stuff, that’s our main focus now, turning wormgpt into a good thing to [the] community,” he said.
It’s unclear yet whether Last’s customers share that view.
Are you skeptical about mainstream artificial intelligence? Or are you all in on AI and use it all day, every day?
The emergence of AI in daily life is streamlining workdays, homework assignments, and for some, personal correspondences. To live in a time where we can access this amazing technology from the smartphones in our pockets is a privilege; however, overusing AI or using it irresponsibly could cause a chain reaction that not only affects you but your close circle and society beyond.
Here are four tips to help you navigate and use AI responsibly.
Artificial intelligence certainly earns the “intelligence” part of its name, but that doesn’t mean it never makes mistakes. Make sure to proofread or review everything AI creates, be it written, visual, or audio content.
For instance, if you’re seeking a realistic image or video, AI often adds extra fingers and distorts faces. Some of its creations can be downright nightmarish! Also, there’s a phenomenon known as an AI hallucination. This occurs when the AI doesn’t admit that it doesn’t know the answer to your question. Instead, it makes up information that is untrue and even fabricates fake sources to back up its claims.
One AI hallucination landed a lawyer in big trouble in New York. The lawyer used ChatGPT to write a brief, but he didn’t double check the AI’s work. It turns out the majority of the brief was incorrect.1
Whether you’re a blogger with thousands of readers or you ask AI to write a little blurb to share amongst your friends or coworkers, it is imperative to edit everything that an AI tool generates. Not doing so could start a rumor based on a completely false claim.
If you use AI to do more than gather a few rough ideas, you should cite the tool you used as a source. Passing off an AI’s work as your own could be considered cheating in the eyes of teachers, bosses, or critics.
There’s a lot of debate about whether AI has a place in the art world. One artist entered an image in a photography contest that he secretly created with AI. When his submission won the contest, the photographer revealed AI’s role in the image and gave up his prize. The photographer intentionally kept AI out of the conversation to prove a point, but imagine if he had kept the image’s origin to himself.2 Would that be fair? When other photographers had to wait for the perfect angle of sunlight or catch a fleeting moment in time, should an AI-generated image with manufactured lighting and static subjects be judged the same way?
Even if you don’t personally use AI, you’re still likely to encounter it daily, whether you realize it or not. AI-generated content is popular on social media, like the deepfake video game battles between politicians.3 (A deepfake is a manipulation of a photo, video, or audio clip that depicts something that never happened.) The absurdity of this video series is likely to tip off the viewer to its playful intent, though it’s best practice to add a disclaimer to any deepfake.
Some deepfakes have malicious intent on top of looking and sounding very realistic. Especially around election time, fake news reports are likely to swirl and discredit the candidates. A great rule of thumb is: if it seems too fantastical to be true, it likely isn’t. Sometimes all it takes is five minutes to verify the authenticity of a social media post, photo, video, or news report. Think critically about the authenticity of the report before sharing, because fake news reports spread quickly, and many are incendiary in nature.
According to “McAfee’s Modern Love Research Report,” 26% of respondents said they would use AI to write a love note; however, 49% of people said that they’d feel hurt if their partner tasked a machine with writing a love note instead of writing one with their own human heart and soul.
Today’s AI is not sentient. That means that even if the final output moved you to tears or to laugh out loud, the AI itself doesn’t truly understand the emotions behind what it creates. It’s simply using patterns to craft a reply to your prompt. Hiding or funneling your true feelings into a computer program could result in a shaky and secretive relationship.
Plus, if everyone relied upon AI content generation tools like ChatGPT, Bard, and Copy.ai, then how can we trust any genuine display of emotion? What would the future of novels, poetry, and even Hollywood look like?
Responsible AI is a term that governs the responsibilities programmers have to society to ensure they populate AI systems with bias-free and accurate data. OpenAI (the organization behind ChatGPT and DALL-E) vows to act in “the best interests of humanity.”4 From there, the everyday people who interact with AI must similarly act in the best interests of themselves and those around them to avoid unleashing the dangers of AI upon society.
The capabilities of AI are vast, and the technology is getting more sophisticated by the day. To ensure that the human voice and creative spirit don’t permanently take on a robotic feel, it’s best to use AI in moderation and be open with others about how you use it.
To give you additional peace of mind, McAfee+ can restore your online privacy and identity should you fall into an AI-assisted scam. With identity restoration experts and up to $2 million in identity theft coverage, you can feel better about navigating this new dimension in the online world.
1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”
2ARTnews, “Artist Wins Photography Contest After Submitting AI-Generated Image, Then Forfeits Prize”
3Business Insider, “AI-generated audio of Joe Biden and Donald Trump trashtalking while gaming is taking over TikTok”
4OpenAI, “OpenAI Charter”
The post Four Ways To Use AI Responsibly appeared first on McAfee Blog.
Artificial intelligence used to be reserved for the population’s most brilliant scientists and isolated in the world’s top laboratories. Now, AI is available to anyone with an internet connection. Tools like ChatGPT, Voice.ai, DALL-E, and others have brought AI into daily life, but sometimes the terms used to describe their capabilities and inner workings are anything but mainstream.
Here are 10 common terms you’re likely to hear in the same sentence as your favorite AI tool, on the nightly news, or by the water cooler. Keep this AI dictionary handy to stay informed about this popular (and sometimes controversial) topic.
AI-generated content is any piece of written, audio, or visual media that was created partially or completely by an artificial intelligence-powered tool.
If someone uses AI to create something, it doesn’t automatically mean they cheated or irresponsibly cut corners. AI is often a great place to start when creating outlines, compiling thought-starters, or seeking a new way of looking at a problem.
When your question stumps an AI, it doesn’t always admit that it doesn’t know the answer. So, instead of not giving an answer, it’ll make one up that it thinks you want to hear. This made-up answer is known as an AI hallucination.
One real-world case of a costly AI hallucination occurred in New York where a lawyer used ChatGPT to write a brief. The brief seemed complete and cited its sources, but it turns out that none of the sources existed.1 It was all a figment of the AI’s “imagination.”
To understand the term black box, imagine the AI as a system of cogs, pulleys, and conveyor belts housed within a box. In a see-through box, you can see how the input is transformed into the final product; however, some AI models are referred to as a black box. That means you don’t know how the AI arrived at its conclusions. The AI completely hides its reasoning process. A black box can be a problem if you’d like to double-check the AI’s work.
Deepfake is the manipulation of a photo, video, or audio clip to portray events that never happened. Often used for humorous social media skits and viral posts, deepfakes are also being leveraged by unsavory characters to spread fake news reports or to scam people.
For example, people are inserting politicians into unflattering poses and photo backgrounds. Sometimes the deepfake is intended to get a laugh, but other times the deepfake creator intends to spark rumors that could lead to dissent or tarnish the reputation of the photo subject. One tip to spot a deepfake image is to look at the hands and faces of people in the background. Deepfakes often add or subtract fingers or distort facial expressions.
AI-assisted audio impersonations – which are considered deepfakes – are also rising in believability. According to McAfee’s “Beware the Artificial Imposter” report, 25% of respondents globally said that a voice scam happened either to themselves or to someone they know. Seventy-seven percent of people who were targeted by a voice scam lost money as a result.
The closer an AI’s thinking process is to the human brain, the more accurate the AI is likely to be. Deep learning involves training an AI to reason and recall information like a human, meaning that the machine can identify patterns and make predictions.
Explainable AI – or white box – is the opposite of black box AI. An explainable AI model always shows its work and how it arrived at its conclusion. Explainable AI can boost your confidence in the final output because you can double-check what went into the answer.
Generative AI is the type of artificial intelligence that powers many of today’s mainstream AI tools, like ChatGPT, Bard, and Craiyon. Like a sponge, generative AI soaks up huge amounts of data and recalls it to inform every answer it creates.
Machine learning is integral to AI, because it lets the AI learn and continually improve. Without explicit instructions to do so, machine learning within AI allows the AI to get smarter the more it’s used.
People must not only use AI responsibly, but the people designing and programming AI must do so responsibly, too. Technologists must ensure that the data the AI depends on is accurate and free from bias. This diligence is necessary to confirm that the AI’s output is correct and without prejudice.
Sentient is an adjective that means someone or something is aware of feelings, sensations, and emotions. In futuristic movies depicting AI, the characters’ world goes off the rails when the robots become sentient, or when they “feel” human-like emotions. While it makes for great Hollywood drama, today’s AI is not sentient. It doesn’t empathize or understand the true meanings of happiness, excitement, sadness, or fear.
So, even if an AI composed a short story that is so beautiful it made you cry, the AI doesn’t know that what it created was touching. It was just fulfilling a prompt and used a pattern to determine which word to choose next.
1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”
The post 10 Artificial Intelligence Buzzwords You Should Know appeared first on McAfee Blog.
It’s all anyone can talk about. In classrooms, boardrooms, on the nightly news, and around the dinner table, artificial intelligence (AI) is dominating conversations. Given the passion with which everyone is debating, celebrating, and villainizing AI, you’d think it was a completely new technology; however, AI has been around in various forms for decades. Only now is it accessible to everyday people like you and me.
The most famous of these mainstream AI tools are ChatGPT, DALL-E, and Bard, among others. The specific technology that links these tools is called generative artificial intelligence. Sometimes shortened to gen AI, you’re likely to have heard this term in the same sentence as deepfake, AI art, and ChatGPT. But how does the technology work?
Here’s a simple explanation of how generative AI powers many of today’s famous (or infamous) AI tools.
Generative AI is the specific type of artificial intelligence that powers many of the AI tools available today in the pockets of the public. The “G” in ChatGPT stands for generative. Today’s gen AI evolved from the chatbots of the 1960s. Now, as AI and related technologies like deep learning and machine learning have evolved, generative AI can answer prompts and create text, art, videos, and even simulate convincing human voices.
Think of generative AI as a sponge that desperately wants to delight the users who ask it questions.
First, a gen AI model begins with a massive information deposit. Gen AI can soak up huge amounts of data. For instance, ChatGPT is trained on 300 billion words and hundreds of megabytes worth of facts. The AI will remember every piece of information that is fed into it. Additionally, it will use those nuggets of knowledge to inform any answer it spits out.
From there, a generative adversarial network (GAN) algorithm constantly competes with itself within the gen AI model. This means that the AI will try to outdo itself to produce an answer it believes is the most accurate. The more information and queries it answers, the “smarter” the AI becomes.
Google’s content generation tool, Bard, is a great way to illustrate generative AI in action. Bard is based on gen AI and large language models. It’s trained on all types of literature, and when asked to write a short story, it composes by finding language patterns and choosing the word that most often follows the one preceding it. In a 60 Minutes segment, Bard composed an eloquent short story that nearly brought the presenter to tears, but its composition was an exercise in patterns, not a display of understanding human emotions. So, while the technology is certainly smart, it’s not exactly creative.
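To make the “patterns, not understanding” point concrete, here is a toy sketch of pattern-based word selection, vastly simpler than any real model like Bard (the sample text and code are our own illustration):

# Build a table of which word follows which, then generate text by
# repeatedly picking a recorded successor of the current word.
import random
from collections import defaultdict

sample = "the cat sat on the mat and the cat slept on the mat"
words = sample.split()
follows = defaultdict(list)
for a, b in zip(words, words[1:]):
    follows[a].append(b)

word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word])  # frequent successors get picked more often
    out.append(word)
print(" ".join(out))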
The major debates surrounding generative AI usually deal with how to use gen AI-powered tools for good. For instance, ChatGPT can be an excellent outlining partner if you’re writing an essay or completing a task at work; however, it’s irresponsible and is considered cheating if a student or an employee submits ChatGPT-written content word for word as their own work. If you do decide to use ChatGPT, it’s best to be transparent that it helped you with your assignment. Cite it as a source and make sure to double-check your work!
One lawyer got in serious trouble when he trusted ChatGPT to write an entire brief and then didn’t take the time to edit its output. It turns out that much of the content was incorrect and cited sources that didn’t exist. This is a phenomenon known as an AI hallucination, meaning the program fabricated a response instead of admitting that it didn’t know the answer to the prompt.
Deepfake and voice simulation technology supported by generative AI are other applications that people must use responsibly and with transparency. Deepfake and AI voices are gaining popularity in viral videos and on social media. Posters use the technology in funny skits poking fun at celebrities, politicians, and other public figures. However, to avoid confusing the public and possibly spurring fake news reports, these comedians have a responsibility to add a disclaimer that the real person was not involved in the skit. Fake news reports can spread with the speed and ferocity of wildfire.
The widespread use of generative AI doesn’t necessarily mean the internet is a less authentic or a riskier place. It just means that people must use sound judgment and hone their radar for identifying malicious AI-generated content. Generative AI is an incredible technology. When used responsibly, it can add great color, humor, or a different perspective to written, visual, and audio content.
Technology can also help protect against voice cloning attacks. Tools like McAfee Deepfake Detector, aim to detect AI-generated deepfakes, including audio-based clones. Stay informed about advancements in security technology and consider utilizing such tools to bolster your defenses.
The post What Is Generative AI and How Does It Work? appeared first on McAfee Blog.
ReconAIzer is a powerful Jython extension for Burp Suite that leverages OpenAI to help bug bounty hunters optimize their recon process. This extension automates various tasks, making it easier and faster for security researchers to identify and exploit vulnerabilities.
Once installed, ReconAIzer adds a contextual menu and a dedicated tab to display its results:
Follow these steps to install the ReconAIzer extension on Burp Suite:
Select the ReconAIzer.py file downloaded in Step 3.1, then click "Open."

Congratulations! You have successfully installed the ReconAIzer extension in Burp Suite. You can now start using it to enhance your bug bounty hunting experience.
Once that's done, configure your OpenAI API key on the "Config" tab under the "ReconAIzer" tab.
Feel free to suggest prompt improvements or anything else you would like to see in ReconAIzer!
Happy bug hunting!
The dystopian 2020s, ’30s, and ’40s depicted in novels and movies written and produced decades ago blessedly seem very far off from the timeline of reality. Yes, we have refrigerators that suggest grocery lists, and we have devices in our pockets that control seemingly every function of our homes. But there aren’t giant robots roaming the streets bellowing orders.
Humans are still very much in control of society. To keep it that way, we must use the latest technological advancements for their main intended purpose: To boost convenience in our busy lives.
The future of technology is bright. With the right attitude, security tools, and a reminder every now and then to look up from our devices, humans will be able to enjoy everything the future of technology holds.
A look into the future of technology would be incomplete without touching on the applications and impact of artificial intelligence (AI) on everyday tasks. Platforms like ChatGPT, Voice.ai, and Craiyon have thrilled, shocked, and unnerved the world in equal measure. AI has already transformed the work life, home life, and free time of everyday people everywhere.
According to McAfee’s Modern Love Research Report, 26% of people would use AI to aid in composing a love note. Plus, more than two-thirds of those surveyed couldn’t tell the difference between a love note written by AI and a human. AI can be a good tool to generate ideas, but replacing genuine human emotion with the words of a computer program could create a shaky foundation for a relationship.
The Center for AI Safety urges that humans must take an active role in using AI responsibly. Cybercriminals and unsavory online characters are already using it maliciously to gain financially and spread incendiary misinformation. For example, AI-generated voice imposters are scamming concerned family members and friends with heartfelt pleas for financial help with a voice that sounds just like their loved one. Voice scams are turning out to be fruitful for scammers: 77% of people polled who received a cloned voice scam lost money as a result.
Even people who aren’t intending mischief can cause a considerable amount when they use AI to cut corners. One lawyer’s testimony went awry when his research partner, ChatGPT, went rogue and completely made up its justifications.1 This phenomenon is known as an AI hallucination. It occurs when ChatGPT or a similar AI content generation tool doesn’t know the answer to your question, so it fabricates sources and assures you that it’s giving you the truth.
Overreliance on ChatGPT’s output and immediately trusting it as truth can lead to an internet rampant with fake news and false accounts. Keep in mind that using ChatGPT introduces risk in the content creation process. Use it responsibly.
Though it’s powered by AI and could fall under the AI section above, deepfake technology is exploding and deserves its own spotlight. Deepfake technology is the manipulation of videos to digitally transform one person’s appearance to resemble someone else’s, usually a public figure’s. Deepfake videos are often accompanied by AI-altered voice tracks. Deepfake challenges the accuracy of the common saying, “Seeing is believing.” Now, it’s more difficult than ever to separate fact from fiction.
Not all deepfake uses are nefarious. Deepfake could become a valuable tool in special effects and editing for the gaming and film industries. Additionally, fugitive sketch artists could leverage deepfake to create ultra-realistic portraits of wanted criminals. If you decide to use deepfake to add some flair to your social media feed or portfolio, make sure to add a disclaimer that you altered reality with the technology.
Currently, it’s estimated that there are more than 15 billion connected devices in the world. A connected device is defined as anything that connects to the internet. In addition to smartphones, computers, and tablets, connected devices also extend to smart refrigerators, smart lightbulbs, smart TVs, virtual home assistants, smart thermostats, etc. By 2030, there may be as many as 29 billion connected devices.2
The growing number of connected devices can be attributed to our desire for convenience. The ability to remote start your car on a frigid morning from the comfort of your home would’ve been a dream in the 1990s. Checking your refrigerator’s contents from the grocery store eliminates the need for a pesky second trip to pick up the items you forgot the first time around.
The downfall of so many connected devices is that they present cybercriminals with literally billions of opportunities to steal people’s personally identifiable information. Each device is a window into your online life, so it’s essential to guard each one to keep cybercriminals away from your important personal details.
With the widespread adoption of email, then cellphones, and then social media in the ’80s, ’90s and early 2000s, respectively, people have integrated technology into their daily lives that better helps them connect with other people. More recent technological innovations seem to trend toward how to connect people to their other devices for a seamless digital life.
We shouldn’t ignore that the more devices and online accounts we manage, the more opportunities cybercriminals have to weasel their way into your digital life and put your personally identifiable information at risk. To protect your online privacy, devices, and identity, entrust your digital safety to McAfee+. McAfee+ includes $1 million in identity theft coverage, virtual private network (VPN), Personal Data Cleanup, and more.
The future isn’t a scary place. It’s a place of infinite technological possibilities! Explore them confidently with McAfee+ by your side.
1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”
2Statista, “Number of Internet of Things (IoT) connected devices worldwide from 2019 to 2021, with forecasts from 2022 to 2030”
The post The Future of Technology: AI, Deepfake, & Connected Devices appeared first on McAfee Blog.
Anyone can try ChatGPT for free. Yet that hasn’t stopped scammers from trying to cash in on it.
A rash of sketchy apps has cropped up in Apple’s App Store and Google Play. They pose as ChatGPT apps and try to fleece smartphone owners with phony subscriptions.
Yet you can spot them quickly when you know what to look for.
ChatGPT is an AI-driven chatbot service created by OpenAI. It lets you have uncannily human conversations with an AI that’s been programmed and fed with information over several generations of development. Provide it with an instruction or ask it a question, and the AI provides a detailed response.
Unsurprisingly, it has millions of people clamoring to use it. All it takes is a single prompt, and the prompts range far and wide.
People ask ChatGPT to help them write cover letters for job interviews, make travel recommendations, and explain complex scientific topics in plain language. One person highlighted how they used ChatGPT to run a tabletop game of Dungeons & Dragons for them. (If you’ve ever played, you know that’s a complex task that calls for a fair share of cleverness to keep the game entertaining.)
That’s just a handful of examples. As for myself, I’ve been using ChatGPT in the kitchen. My family and I have been digging into all kinds of new recipes thanks to its AI.
So, where do the scammers come in?
Scammers have recently started posting copycat apps that look like they’re powered by ChatGPT but aren’t. What’s more, they charge people a fee to use them, a prime example of fleeceware. OpenAI, the maker of ChatGPT, has just officially launched its iOS app for U.S. iPhone users, which can be downloaded from the Apple App Store here. The official Android version is yet to be released.
Fleeceware mimics a pre-existing service that’s free or low-cost and then charges an excessive fee to use it. Basically, it’s a copycat. An expensive one at that.
Fleeceware scammers often lure in their victims with “a free trial” that quickly converts into a subscription. However, with fleeceware, the terms of the subscription are steep. They might bill the user weekly, and at rates much higher than the going rate.
The result is that the fleeceware app might cost the victim a few bucks before they can cancel it. Worse yet, the victim might forget about the app entirely and run up hundreds of dollars before they realize what’s happening. Again, all for a simple app that’s free or practically free elsewhere.
What makes fleeceware so tricky to spot is that it can look legit at first glance. Plenty of smartphone apps offer subscriptions and other in-app purchases. In effect, fleeceware hides in plain sight among the thousands of other legitimate apps in the hopes you’ll download it.
With that, any app that charges a fee to use ChatGPT is fleeceware. ChatGPT offers basic functionality that anyone can use for free.
There is one case where you might pay a fee to use ChatGPT. It has its own subscription-level offering, ChatGPT Plus. With a subscription, ChatGPT responds more quickly to prompts and offers access during peak hours when free users might be shut out. That’s the one legitimate case where you might pay to use it.
In all, more and more people want to take ChatGPT for a spin. However, they might not realize it’s free. Scammers bank on that, and so we’ve seen a glut of phony ChatGPT apps that aim to install fleeceware onto people’s phones.
Read the description of the app and see what the developer is really offering. If the app charges you to use ChatGPT, it’s fleeceware. Anyone can use ChatGPT for free by setting up an account at its official website, https://chat.openai.com.
Reviews can tell you quite a bit about an app. They can also tell you how the company that created it handles customer feedback.
In the case of fleeceware, you’ll likely see reviews that complain about sketchy payment terms. They might mention three-day trials that automatically convert to pricey monthly or weekly subscriptions. Moreover, they might describe how payment terms have changed and become more costly as a result.
In the case of legitimate apps, billing issues can arise from time to time, so see how the company handles complaints. Companies in good standing will typically provide links to customer service where people can resolve any issues they have. Company responses that are vague, or a lack of responses at all, should raise a red flag.
Scammers are smart. They know many people glance at an overall rating of 4/5 stars or more and think that’s good enough, so they’ll pack their app landing page with dozens and dozens of phony, fawning reviews to make the app look legitimate. This tactic serves another purpose: it buries the true reviews written by actual users, which might be negative because the app is a scam.
Filter the app’s reviews for the one-star reviews and see what concerns people have. Do they mention overly aggressive billing practices, like the wickedly high prices and weekly billing cycles mentioned above? That might be a sign of fleeceware. Again, see if the app developer responded to the concerns and note the quality of the response. A legitimate company will honestly want to help a frustrated user and provide clear next steps to resolve the issue.
Google Play does its part to keep its virtual shelves free of malware-laden apps with a thorough submission process, as reported by Google. It further keeps things safer through its App Defense Alliance that shares intelligence across a network of partners, of which we’re a proud member. Further, users also have the option of running Play Protect to check apps for safety before they’re downloaded. Apple’s App Store has its own rigorous submission process for submitting apps. Likewise, Apple deletes hundreds of thousands of malicious apps from its store each year.
Third-party app stores might not have protections like these in place. Moreover, some of them might be fronts for illegal activity. Organized cybercrime organizations deliberately populate their third-party stores with apps that steal funds or personal information. Stick with the official app stores for the most complete protection possible.
Many fleeceware apps deliberately make it tough to cancel them. You’ll often see complaints about that in reviews: “I don’t see where I can cancel my subscription!” Deleting the app from your phone is not enough; your subscription will remain active until you cancel it.
Luckily, your phone makes it easy to cancel subscriptions right from your settings menu. Canceling makes sure your credit or debit card won’t get charged when the next billing cycle comes up.
Be wary. Many fleeceware apps have aggressive billing cycles. Sometimes weekly.
ChatGPT is free. Anyone can use it by setting up a free account with OpenAI at https://chat.openai.com. Smartphone apps that charge you to use it are a scam.
You can download the official app, currently available for iOS, from the App Store.
The post Anyone Can Try ChatGPT for Free—Don’t Fall for Sketchy Apps That Charge You appeared first on McAfee Blog.
This is a proof-of-concept application that demonstrates how AI can be used to generate accurate results for vulnerability analysis, while also allowing further utilization of the already super-useful ChatGPT.
openai.api_key = "__API__KEY" # Enter your API key
pip3 install -r requirements.txt
or
pip install -r requirements.txt
Supported on both Windows and Linux.
Profiles:
| Parameter | Return data | Description | Nmap Command |
|---|---|---|---|
| p1 | json | Effective Scan | -Pn -sV -T4 -O -F |
| p2 | json | Simple Scan | -Pn -T4 -A -v |
| p3 | json | Low Power Scan | -Pn -sS -sU -T4 -A -v |
| p4 | json | Partial Intense Scan | -Pn -p- -T4 -A -v |
| p5 | json | Complete Intense Scan | -Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln |
The profile is the type of scan that will be executed by the nmap subprocess. The IP or target is provided via argparse. First, the custom nmap scan is run with all the crucial arguments for the scan. Next, the scan data is extracted from the large volume of output nmap produces: the "scan" object holds sub-data under "tcp", each entry labeled according to the open ports. Once extracted, the data is sent to the OpenAI API's davinci model via a prompt. The prompt specifically asks for JSON output and for the data to be used in a certain manner.
The entire structure of the request sent to the OpenAI API is defined in the completion section of the program.
import nmap
import openai

openai.api_key = "__API__KEY"  # Enter your API key
model_engine = "text-davinci-003"  # assumption: the davinci completion model named above
nm = nmap.PortScanner()

def profile(ip):
    # Run the p5 (Complete Intense Scan) profile against the target
    nm.scan('{}'.format(ip), arguments='-Pn -sS -sU -T4 -A -PE -PP -PS80,443 -PA3389 -PU40125 -PY -g 53 --script=vuln')
    json_data = nm.analyse_nmap_xml_scan()
    analyze = json_data["scan"]
    # Prompt about what the query is all about
    prompt = "do a vulnerability analysis of {} and return a vulnerability report in json".format(analyze)
    # The structure of the request sent to the OpenAI API
    completion = openai.Completion.create(
        engine=model_engine,
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
    )
    response = completion.choices[0].text
    return response
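A hypothetical invocation of the function above (the target address is a placeholder; the -sS/-sU scan types generally require root privileges and a local nmap install):

if __name__ == "__main__":
    # Run the p5 profile against a placeholder target and print the report
    print(profile("192.0.2.10"))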