McAfee Blogs

Four Ways To Use AI Responsibly

Are you skeptical about mainstream artificial intelligence? Or are you all in on AI and use it all day, every day?  

The emergence of AI in daily life is streamlining workdays, homework assignments, and for some, personal correspondences. To live in a time where we can access this amazing technology from the smartphones in our pockets is a privilege; however, overusing AI or using it irresponsibly could cause a chain reaction that not only affects you but your close circle and society beyond. 

Here are four tips to help you navigate and use AI responsibly. 

1. Always Double Check AI’s Work

Artificial intelligence certainly earns the “intelligence” part of its name, but that doesn’t mean it never makes mistakes. Make sure to proofread or review everything AI creates, be it written, visual, or audio content.  

For instance, if you’re seeking a realistic image or video, AI often adds extra fingers and distorts faces. Some of its creations can be downright nightmarish! Also, there’s a phenomenon known as an AI hallucination. This occurs when the AI doesn’t admit that it doesn’t know the answer to your question. Instead, it makes up information that is untrue and even fabricates fake sources to back up its claims. 

One AI hallucination landed a lawyer in big trouble in New York. The lawyer used ChatGPT to write a brief, but he didn’t double-check the AI’s work. It turns out the majority of the brief was incorrect.1 

Whether you’re a blogger with thousands of readers or you ask AI to write a little blurb to share amongst your friends or coworkers, it is imperative to edit everything that an AI tool generates. Not doing so could start a rumor based on a completely false claim. 

2. Be Transparent

If you use AI to do more than gather a few rough ideas, you should cite the tool you used as a source. Passing off an AI’s work as your own could be considered cheating in the eyes of teachers, bosses, or critics.  

There’s a lot of debate about whether AI has a place in the art world. One artist entered an image to a photography contest that he secretly created with AI. When his submission won the contest, the photographer revealed AI’s role in the image and gave up his prize. The photographer intentionally kept AI out of the conversation to prove a point, but imagine if he kept the image’s origin to himself.2 Would that be fair? When other photographers had to wait for the perfect angle of sunlight or catch a fleeting moment in time, should an AI-generated image with manufactured lighting and static subjects be judged the same way? 

3. Share Thoughtfully

Even if you don’t personally use AI, you’re still likely to encounter it daily, whether you realize it or not. AI-generated content is popular on social media, like the deepfake video game battles between politicians.3 (A deepfake is a manipulation of a photo, video, or audio clip that depicts something that never happened.) The absurdity of this video series is likely to tip off the viewer to its playful intent, though it’s best practice to add a disclaimer to any deepfake. 

Some deepfakes have malicious intent on top of looking and sounding very realistic. Especially around election time, fake news reports are likely to swirl and discredit candidates. A great rule of thumb is: If it seems too fantastical to be true, it likely isn’t. Sometimes all it takes is five minutes to verify the authenticity of a social media post, photo, video, or news report. Think critically about its authenticity before sharing. Fake news reports spread quickly, and many are incendiary in nature. 

4. Opt for Authenticity

According to “McAfee’s Modern Love Research Report,” 26% of respondents said they would use AI to write a love note; however, 49% of people said that they’d feel hurt if their partner tasked a machine with writing a love note instead of writing one with their own human heart and soul. 

Today’s AI is not sentient. That means that even if the final output moved you to tears or to laugh out loud, the AI itself doesn’t truly understand the emotions behind what it creates. It’s simply using patterns to craft a reply to your prompt. Hiding or funneling your true feelings into a computer program could result in a shaky and secretive relationship. 

Plus, if everyone relied upon AI content generation tools like ChatGPT, Bard, and Copy.ai, then how could we trust any genuine display of emotion? What would the future of novels, poetry, and even Hollywood look like?  

Be Cautious Yet Confident 

Responsible AI is a term that governs the responsibilities programmers have to society to ensure they populate AI systems with bias-free and accurate data. OpenAI (the organization behind ChatGPT and DALL-E) vows to act in “the best interests of humanity.”4 From there, the everyday people who interact with AI must similarly act in the best interests of themselves and those around them to avoid unleashing the dangers of AI upon society.   

The capabilities of AI are vast, and the technology is getting more sophisticated by the day. To ensure that the human voice and creative spirit don’t permanently take on a robotic feel, it’s best to use AI in moderation and be open with others about how you use it. 

To give you additional peace of mind, McAfee+ can restore your online privacy and identity should you fall victim to an AI-assisted scam. With identity restoration experts and up to $2 million in identity theft coverage, you can feel better about navigating this new dimension in the online world.   

1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT” 

2ARTnews, “Artist Wins Photography Contest After Submitting AI-Generated Image, Then Forfeits Prize” 

3Business Insider, “AI-generated audio of Joe Biden and Donald Trump trashtalking while gaming is taking over TikTok”   

4OpenAI, “OpenAI Charter” 

The post Four Ways To Use AI Responsibly appeared first on McAfee Blog.

10 Artificial Intelligence Buzzwords You Should Know

Artificial intelligence used to be reserved for the population’s most brilliant scientists and isolated in the world’s top laboratories. Now, AI is available to anyone with an internet connection. Tools like ChatGPT, Voice.ai, DALL-E, and others have brought AI into daily life, but sometimes the terms used to describe their capabilities and inner workings are anything but mainstream. 

Here are 10 common terms you’re likely to hear in the same sentence as your favorite AI tool, on the nightly news, or by the water cooler. Keep this AI dictionary handy to stay informed about this popular (and sometimes controversial) topic. 

AI-generated Content 

AI-generated content is any piece of written, audio, or visual media that was created partially or completely by an artificial intelligence-powered tool. 

If someone uses AI to create something, it doesn’t automatically mean they cheated or irresponsibly cut corners. AI is often a great place to start when creating outlines, compiling thought-starters, or seeking a new way of looking at a problem.  

AI Hallucination 

When your question stumps an AI, it doesn’t always admit that it doesn’t know the answer. So, instead of not giving an answer, it’ll make one up that it thinks you want to hear. This made-up answer is known as an AI hallucination. 

One real-world case of a costly AI hallucination occurred in New York where a lawyer used ChatGPT to write a brief. The brief seemed complete and cited its sources, but it turns out that none of the sources existed.1 It was all a figment of the AI’s “imagination.”  

Black Box 

To understand the term black box, imagine the AI as a system of cogs, pulleys, and conveyor belts housed within a box. In a see-through box, you can see how the input is transformed into the final product; however, some AI models are referred to as a black box. That means you don’t know how the AI arrived at its conclusions; the AI completely hides its reasoning process. A black box can be a problem if you’d like to double-check the AI’s work. 

Deepfake 

Deepfake is the manipulation of a photo, video, or audio clip to portray events that never happened. Though often used for humorous social media skits and viral posts, deepfakes are also leveraged by unsavory characters to spread fake news reports or scam people.  

For example, people are inserting politicians into unflattering poses and photo backgrounds. Sometimes the deepfake is intended to get a laugh, but other times the deepfake creator intends to spark rumors that could lead to dissent or tarnish the reputation of the photo subject. One tip to spot a deepfake image is to look at the hands and faces of people in the background. Deepfakes often add or subtract fingers or distort facial expressions. 

AI-assisted audio impersonations – which are considered deepfakes – are also rising in believability. According to McAfee’s “Beware the Artificial Imposter” report, 25% of respondents globally said that a voice scam happened either to themselves or to someone they know. Seventy-seven percent of people who were targeted by a voice scam lost money as a result.  

Deep Learning 

The closer an AI’s thinking process is to the human brain, the more accurate the AI is likely to be. Deep learning involves training an AI to reason and recall information like a human, meaning that the machine can identify patterns and make predictions. 

Explainable AI 

Explainable AI – or white box AI – is the opposite of black box AI. An explainable AI model always shows its work and how it arrived at its conclusion. Explainable AI can boost your confidence in the final output because you can double-check what went into the answer. 
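To make the idea of “showing its work” concrete, here’s a toy sketch of a white-box classifier. It’s purely illustrative (the rules, message text, and two-rule threshold are invented for this example, not taken from any real product): every rule it applies is visible, and its verdict comes with the exact list of rules that fired.

```python
# Toy "white box" spam check: every rule is human-readable, and the
# verdict is returned together with the rules that triggered it.
# (Illustrative sketch only; real explainable-AI systems are far richer.)

RULES = [
    ("mentions a prize", lambda msg: "prize" in msg.lower()),
    ("asks for urgent action", lambda msg: "urgent" in msg.lower()),
    ("contains a plain-http link", lambda msg: "http://" in msg.lower()),
]

def classify(message):
    # The "explanation" is simply the list of rules that fired.
    fired = [name for name, test in RULES if test(message)]
    verdict = "spam" if len(fired) >= 2 else "not spam"
    return verdict, fired

verdict, reasons = classify("URGENT: claim your prize at http://example.com")
print(verdict)   # spam
print(reasons)   # all three rule names
```

Because every step is inspectable, you can double-check exactly why a message was flagged, which is the core promise of explainable AI.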

Generative AI 

Generative AI is the type of artificial intelligence that powers many of today’s mainstream AI tools, like ChatGPT, Bard, and Craiyon. Like a sponge, generative AI soaks up huge amounts of data and recalls it to inform every answer it creates. 

Machine Learning 

Machine learning is integral to AI because it lets the AI learn and continually improve. Machine learning allows an AI to get smarter the more it’s used, without explicit instructions to do so. 

Responsible AI 

People must not only use AI responsibly, but the people designing and programming AI must do so responsibly, too. Technologists must ensure that the data the AI depends on is accurate and free from bias. This diligence is necessary to confirm that the AI’s output is correct and without prejudice.  

Sentient 

Sentient is an adjective that means someone or something is aware of feelings, sensations, and emotions. In futuristic movies depicting AI, the characters’ world goes off the rails when the robots become sentient, or when they “feel” human-like emotions. While it makes for great Hollywood drama, today’s AI is not sentient. It doesn’t empathize or understand the true meanings of happiness, excitement, sadness, or fear. 

So, even if an AI composed a short story that is so beautiful it made you cry, the AI doesn’t know that what it created was touching. It was just fulfilling a prompt and used a pattern to determine which word to choose next.  

1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT” 

The post 10 Artificial Intelligence Buzzwords You Should Know appeared first on McAfee Blog.

What Is Generative AI and How Does It Work?

It’s all anyone can talk about. In classrooms, boardrooms, on the nightly news, and around the dinner table, artificial intelligence (AI) is dominating conversations. With the passion with which everyone is debating, celebrating, and villainizing AI, you’d think it was a completely new technology; however, AI has been around in various forms for decades. Only now is it accessible to everyday people like you and me. 

The most famous of these mainstream AI tools are ChatGPT, DALL-E, and Bard, among others. The specific technology that links these tools is called generative artificial intelligence. Sometimes shortened to gen AI, you’re likely to have heard this term in the same sentence as deepfake, AI art, and ChatGPT. But how does the technology work? 

Here’s a simple explanation of how generative AI powers many of today’s famous (or infamous) AI tools. 

What Is Generative AI? 

Generative AI is the specific type of artificial intelligence that powers many of the AI tools available today in the pockets of the public. The “G” in ChatGPT stands for generative. Today’s gen AI evolved from the chatbots of the 1960s. Now, as AI and related technologies like deep learning and machine learning have matured, generative AI can answer prompts and create text, art, and videos, and even simulate convincing human voices.  

How Does Generative AI Work? 

Think of generative AI as a sponge that desperately wants to delight the users who ask it questions. 

First, a gen AI model begins with a massive deposit of information. Gen AI can soak up huge amounts of data; for instance, ChatGPT was trained on roughly 300 billion words, amounting to hundreds of gigabytes of text. The AI draws on patterns in that information to inform every answer it produces.  

From there, some generative models employ a generative adversarial network (GAN), an algorithm in which two parts of the model constantly compete: one generates candidate outputs while the other judges how convincing they are. In effect, the AI tries to outdo itself to produce the answer it believes is most accurate. The more data and queries a model is trained on, the “smarter” it becomes. 

Google’s content generation tool, Bard, is a great way to illustrate generative AI in action. Bard is based on gen AI and large language models. It’s trained on all types of literature, and when asked to write a short story, it does so by finding language patterns and composing word by word, choosing the word that most often follows the ones before it. In a 60 Minutes segment, Bard composed an eloquent short story that nearly brought the presenter to tears, but its composition was an exercise in patterns, not a display of understanding human emotions. So, while the technology is certainly smart, it’s not exactly creative. 
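The “pick the word that most often follows the previous one” idea can be sketched in miniature. This toy bigram model is purely illustrative (Bard and its peers use vastly larger neural language models, not simple lookup tables, and the tiny corpus here is invented): it counts which word follows which in a short text, then generates by always choosing the most frequent successor.

```python
from collections import Counter, defaultdict

# Tiny training corpus; a real model would ingest billions of words.
corpus = "the cat sat on the mat and the cat sat".split()

# Count which word follows which (a bigram table).
successors = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    successors[prev_word][next_word] += 1

def generate(start, length):
    """Greedily pick the most common successor at each step."""
    words = [start]
    for _ in range(length - 1):
        options = successors.get(words[-1])
        if not options:
            break  # no known successor; stop generating
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 4))  # "the cat sat on"
```

Here “the” is followed by “cat” twice but “mat” only once, so the model always continues with “cat”, which is pattern-matching, not understanding, in the same spirit as the Bard example above.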

How to Use Generative AI Responsibly 

The major debates surrounding generative AI usually deal with how to use gen AI-powered tools for good. For instance, ChatGPT can be an excellent outlining partner if you’re writing an essay or completing a task at work; however, it’s irresponsible and is considered cheating if a student or an employee submits ChatGPT-written content word for word as their own work. If you do decide to use ChatGPT, it’s best to be transparent that it helped you with your assignment. Cite it as a source and make sure to double-check your work!  

One lawyer got in serious trouble when he trusted ChatGPT to write an entire brief and then didn’t take the time to edit its output. It turns out that much of the content was incorrect and cited sources that didn’t exist. This is a phenomenon known as an AI hallucination, meaning the program fabricated a response instead of admitting that it didn’t know the answer to the prompt.  

Deepfake and voice simulation technology supported by generative AI are other applications that people must use responsibly and with transparency. Deepfake and AI voices are gaining popularity in viral videos and on social media. Posters use the technology in funny skits poking fun at celebrities, politicians, and other public figures. However, to avoid confusing the public and possibly spurring fake news reports, these comedians have a responsibility to add a disclaimer that the real person was not involved in the skit. Fake news reports can spread with the speed and ferocity of wildfire.   

The widespread use of generative AI doesn’t necessarily mean the internet is a less authentic or a riskier place. It just means that people must use sound judgment and hone their radar for identifying malicious AI-generated content. Generative AI is an incredible technology. When used responsibly, it can add great color, humor, or a different perspective to written, visual, and audio content. 

Technology can also help protect against voice cloning attacks. Tools like McAfee Deepfake Detector aim to detect AI-generated deepfakes, including audio-based clones. Stay informed about advancements in security technology and consider using such tools to bolster your defenses.

The post What Is Generative AI and How Does It Work? appeared first on McAfee Blog.

The Future of Technology: AI, Deepfake, & Connected Devices

The dystopian 2020s, ’30s, and ’40s depicted in novels and movies written and produced decades ago blessedly seem very far off from the timeline of reality. Yes, we have refrigerators that suggest grocery lists, and we have devices in our pockets that control seemingly every function of our homes. But there aren’t giant robots roaming the streets bellowing orders.  

Humans are still very much in control of society. To keep it that way, we must use the latest technological advancements for their main intended purpose: To boost convenience in our busy lives. 

The future of technology is bright. With the right attitude, security tools, and a reminder every now and then to look up from our devices, humans will be able to enjoy everything the future of technology holds. 

Artificial Intelligence 

A look into the future of technology would be incomplete without touching on the applications and impact of artificial intelligence (AI) on everyday tasks. Platforms like ChatGPT, Voice.ai, and Craiyon have thrilled, shocked, and unnerved the world in equal measure. AI has already transformed the work life, home life, and free time of everyday people everywhere.  

According to McAfee’s Modern Love Research Report, 26% of people would use AI to aid in composing a love note. Plus, more than two-thirds of those surveyed couldn’t tell the difference between a love note written by AI and one written by a human. AI can be a good tool to generate ideas, but replacing genuine human emotion with the words of a computer program could create a shaky foundation for a relationship. 

The Center for AI Safety urges that humans must take an active role in using AI responsibly. Cybercriminals and unsavory online characters are already using it maliciously to gain financially and spread incendiary misinformation. For example, AI-generated voice imposters are scamming concerned family members and friends with heartfelt pleas for financial help with a voice that sounds just like their loved one. Voice scams are turning out to be fruitful for scammers: 77% of people polled who received a cloned voice scam lost money as a result. 

Even people who aren’t intending mischief can cause a considerable amount when they use AI to cut corners. One lawyer’s testimony went awry when his research partner, ChatGPT, went rogue and completely made up its justification.1 This phenomenon is known as an AI hallucination. It occurs when ChatGPT or another similar AI content generation tool doesn’t know the answer to your question, so it fabricates sources and assures you that it’s telling the truth.  

Overreliance on ChatGPT’s output and immediately trusting it as truth can lead to an internet rampant with fake news and false accounts. Keep in mind that using ChatGPT introduces risk in the content creation process. Use it responsibly. 

Deepfake 

Though it’s powered by AI and could fall under the AI section above, deepfake technology is exploding and deserves its own spotlight. Deepfake technology is the manipulation of videos to digitally transform one person’s appearance to resemble someone else’s, usually a public figure’s. Deepfake videos are often accompanied by AI-altered voice tracks. Deepfake challenges the accuracy of the common saying, “Seeing is believing.” Now, it’s more difficult than ever to separate fact from fiction.   

Not all deepfake uses are nefarious. Deepfake could become a valuable tool in special effects and editing for the gaming and film industries. Additionally, sketch artists could leverage deepfake technology to create ultra-realistic portraits of wanted fugitives. If you decide to use deepfake to add some flair to your social media feed or portfolio, make sure to add a disclaimer that you altered reality with the technology. 

Connected Devices 

Currently, it’s estimated that there are more than 15 billion connected devices in the world. A connected device is defined as anything that connects to the internet. In addition to smartphones, computers, and tablets, connected devices also extend to smart refrigerators, smart lightbulbs, smart TVs, virtual home assistants, smart thermostats, etc. By 2030, there may be as many as 29 billion connected devices.2 

The growing number of connected devices can be attributed to our desire for convenience. The ability to remote start your car on a frigid morning from the comfort of your home would’ve been a dream in the 1990s. Checking your refrigerator’s contents from the grocery store eliminates the need for a pesky second trip to pick up the items you forgot the first time around. 

The downside of so many connected devices is that they present cybercriminals with literally billions of opportunities to steal people’s personally identifiable information. Each device is a window into your online life, so it’s essential to guard every device to keep cybercriminals away from your important personal details. 

What the Future of Technology Holds for You 

With the widespread adoption of email, then cellphones, and then social media in the ’80s, ’90s, and early 2000s, respectively, people have integrated into their daily lives technology that helps them better connect with other people. More recent innovations seem to trend toward connecting people to their other devices for a seamless digital life. 

We shouldn’t ignore that the more devices and online accounts we manage, the more opportunities cybercriminals have to weasel their way into our digital lives and put our personally identifiable information at risk. To protect your online privacy, devices, and identity, entrust your digital safety to McAfee+. McAfee+ includes $1 million in identity theft coverage, a virtual private network (VPN), Personal Data Cleanup, and more. 

The future isn’t a scary place. It’s a place of infinite technological possibilities! Explore them confidently with McAfee+ by your side. 

1The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT” 

2Statista, “Number of Internet of Things (IoT) connected devices worldwide from 2019 to 2021, with forecasts from 2022 to 2030” 

The post The Future of Technology: AI, Deepfake, & Connected Devices appeared first on McAfee Blog.

Anyone Can Try ChatGPT for Free—Don’t Fall for Sketchy Apps That Charge You

Anyone can try ChatGPT for free. Yet that hasn’t stopped scammers from trying to cash in on it.  

A rash of sketchy apps has cropped up in Apple’s App Store and Google Play. They pose as ChatGPT apps and try to fleece smartphone owners with phony subscriptions.  

Yet you can spot them quickly when you know what to look for. 

What is ChatGPT, and what are people doing with it? 

ChatGPT is an AI-driven chatbot service created by OpenAI. It lets you have uncannily human conversations with an AI that’s been programmed and fed with information over several generations of development. Provide it with an instruction or ask it a question, and the AI provides a detailed response. 

Unsurprisingly, it has millions of people clamoring to use it. All it takes is a single prompt, and the prompts range far and wide.  

People ask ChatGPT to help them write cover letters for job interviews, make travel recommendations, and explain complex scientific topics in plain language. One person highlighted how they used ChatGPT to run a tabletop game of Dungeons & Dragons for them. (If you’ve ever played, you know that’s a complex task that calls for a fair share of cleverness to keep the game entertaining.)  

That’s just a handful of examples. As for myself, I’ve been using ChatGPT in the kitchen. My family and I have been digging into all kinds of new recipes thanks to its AI. 

Sketchy ChatGPT apps in the App Store and Google Play 

So, where do the scammers come in? 

Scammers have recently started posting copycat apps that look like they’re powered by ChatGPT but aren’t. What’s more, they charge people a fee to use them: a prime example of fleeceware. OpenAI, the maker of ChatGPT, has just officially launched its iOS app for U.S. iPhone users, which can be downloaded from the Apple App Store. The official Android version has yet to be released.  

Fleeceware mimics a pre-existing service that’s free or low-cost and then charges an excessive fee to use it. Basically, it’s a copycat. An expensive one at that.  

Fleeceware scammers often lure in their victims with a “free trial” that quickly converts into a subscription. With fleeceware, though, the terms of the subscription are steep: the app might bill the user weekly, at rates far above the going rate for comparable apps. 

The result is that the fleeceware app might cost the victim a few bucks before they can cancel it. Worse yet, the victim might forget about the app entirely and run up hundreds of dollars before they realize what’s happening. Again, all for a simple app that’s free or practically free elsewhere. 

What makes fleeceware so tricky to spot is that it can look legit at first glance. Plenty of smartphone apps offer subscriptions and other in-app purchases. In effect, fleeceware hides in plain sight among the thousands of other legitimate apps in the hopes you’ll download it. 

With that, any app that charges a fee to use ChatGPT is fleeceware. ChatGPT offers basic functionality that anyone can use for free.  

There is one case where you might pay a fee to use ChatGPT. It has its own subscription-level offering, ChatGPT Plus. With a subscription, ChatGPT responds more quickly to prompts and offers access during peak hours when free users might be shut out. That’s the one legitimate case where you might pay to use it. 

In all, more and more people want to take ChatGPT for a spin. However, they might not realize it’s free. Scammers bank on that, and so we’ve seen a glut of phony ChatGPT apps that aim to install fleeceware onto people’s phones. 

How do you keep fleeceware and other bad apps off your phone?  

Read the fine print. 

Read the description of the app and see what the developer is really offering. If the app charges you to use ChatGPT, it’s fleeceware. Anyone can use ChatGPT for free by setting up an account at its official website, https://chat.openai.com. 

Look at the reviews. 

Reviews can tell you quite a bit about an app. They can also tell you how the company that created it handles customer feedback.  

In the case of fleeceware, you’ll likely see reviews that complain about sketchy payment terms. They might mention three-day trials that automatically convert to pricey monthly or weekly subscriptions. Moreover, they might describe how payment terms have changed and become more costly as a result.  

In the case of legitimate apps, billing issues can arise from time to time, so see how the company handles complaints. Companies in good standing will typically provide links to customer service where people can resolve any issues they have. Company responses that are vague, or a lack of responses at all, should raise a red flag. 

Be skeptical about overwhelmingly positive reviews. 

Scammers are smart. They’ll count on you to look at an overall good review of 4/5 stars or more and think that’s good enough. They know this, so they’ll pack their app landing page with dozens and dozens of phony and fawning reviews to make the app look legitimate. This tactic serves another purpose: it hides the true reviews written by actual users, which might be negative because the app is a scam. 

Filter the app’s reviews for the one-star reviews and see what concerns people have. Do they mention overly aggressive billing practices, like the wickedly high prices and weekly billing cycles mentioned above? That might be a sign of fleeceware. Again, see if the app developer responded to the concerns and note the quality of the response. A legitimate company will honestly want to help a frustrated user and provide clear next steps to resolve the issue. 

Steer clear of third-party app stores. 

Google Play does its part to keep its virtual shelves free of malware-laden apps with a thorough submission process, as reported by Google. It further keeps things safer through its App Defense Alliance, which shares intelligence across a network of partners, of which we’re a proud member. Users also have the option of running Play Protect to check apps for safety before they’re downloaded. Apple’s App Store has its own rigorous review process for app submissions, and Apple deletes hundreds of thousands of malicious apps from its store each year. 

Third-party app stores might not have protections like these in place. Moreover, some of them might be fronts for illegal activity. Organized cybercrime organizations deliberately populate their third-party stores with apps that steal funds or personal information. Stick with the official app stores for the most complete protection possible.  

Cancel unwanted subscriptions from your phone. 

Many fleeceware apps deliberately make it tough to cancel them. You’ll often see complaints about that in reviews: “I don’t see where I can cancel my subscription!” Deleting the app from your phone is not enough; your subscription will remain active until you cancel it in your app store account settings.  

Luckily, your phone makes it easy to cancel subscriptions right from your settings menu. Canceling makes sure your credit or debit card won’t get charged when the next billing cycle comes up. 

Be wary. Many fleeceware apps have aggressive billing cycles. Sometimes weekly.  

The safest and best way to enjoy ChatGPT: Go directly to the source. 

ChatGPT is free. Anyone can use it by setting up a free account with OpenAI at https://chat.openai.com. Smartphone apps that charge you to use it are a scam. 

How to download the official ChatGPT app 

You can download the official app, currently available for iOS, from the App Store. 

The post Anyone Can Try ChatGPT for Free—Don’t Fall for Sketchy Apps That Charge You appeared first on McAfee Blog.
