Meta has unleashed a groundbreaking feature that transforms Instagram from a photo-sharing platform into a real-time location broadcaster. While the company promises enhanced connectivity, cybersecurity experts are sounding alarm bells about potential dangers lurking beneath this seemingly innocent update.
Instagram’s freshly minted “Map” functionality represents a seismic shift in social media architecture. Unlike traditional posting, where you deliberately choose what to share, this feature operates as an always-on location transmitter, broadcasting your whereabouts to selected contacts every time you open the application.
The mechanism mirrors Snapchat’s infamous Snap Map, but with Instagram’s massive user base—over 2 billion active accounts—the implications for personal security are amplified considerably. This feature enables users to share their real-time location with friends and view theirs on a live map, but it also raises serious privacy concerns, ranging from targeted advertising to potential stalking and misuse in abusive relationships.
McAfee’s Chief Technology Officer Steve Grobman provides crucial context: “Features like location sharing aren’t inherently bad, but they come with tradeoffs. It’s about making informed choices. When people don’t fully understand what’s being shared or who can see it, that’s when it becomes a risk.”
Digital predators can exploit location data to track victims with unprecedented precision. Relationship and parenting experts warn that location sharing can turn into a stressful or even dangerous form of control, with research showing that 19 percent of 18- to 24-year-olds think it’s reasonable to expect to track an intimate partner’s location.
Steve Grobman emphasizes the real-world implications: “There’s also a real-world safety concern. If someone knows where you are in real time, that could lead to stalking, harassment, or even assault. Location data can be powerful, and in the wrong hands, dangerous.”
Your boss, colleagues, or acquaintances might gain unwanted insights into your personal activities. Imagine explaining why you visited a competitor’s office or why you called in sick while appearing at a shopping center.
The danger often comes from within your own network. Grobman warns: “It only takes one person with bad intentions for location sharing to become a serious problem. You may think your network is made up of friends, but in many cases, people accept requests from strangers or someone impersonating a contact without really thinking about the consequences.”
While Instagram claims it doesn’t use location data from this feature for ad targeting, the platform’s history with user data suggests caution. Your movement patterns create valuable behavioral profiles for marketers.
Cybercriminals employ sophisticated data aggregation techniques. According to Grobman: “Criminals can use what’s known as the mosaic effect, combining small bits of data like your location, routines, and social posts to build a detailed profile. They can use that information to run scams against a consumer or their connections, guess security questions, or even commit identity theft.”
For iPhone Users:
For Android Users:
Method 1: Through the Map Interface
Method 2: Through Profile Settings
iPhone Security Configuration:
Android Security Setup:
After implementing these changes:
Audit Your Digital Footprint
Review all social media platforms for similar location-sharing features. Snapchat, Facebook, and TikTok offer comparable functionalities that require individual deactivation.
Implement Location Spoofing Awareness
Some users consider VPN services or location-spoofing applications, but these methods can violate platform terms of service and create additional security vulnerabilities.
Regular Security Hygiene
Establish monthly reviews of your privacy settings across all social platforms. Companies frequently update features and reset user preferences without explicit notification.
Grobman emphasizes the challenge consumers face: “Most social platforms offer privacy settings that offer fine-grained control, but the reality is many people don’t know those settings exist or don’t take the time to use them. That can lead to oversharing, especially when it comes to things like your location.”
Family Protection Protocols
If you’re a parent with supervision set up for your teen, you can control their location sharing experience on the map, get notified when they enable it, and see who they’re sharing with. Implement these controls immediately for underage family members.
Data Collection Frequency
Your location updates whenever you open the app or return to it while running in the background. This means Instagram potentially logs your position multiple times daily, creating detailed movement profiles.
Data Retention Policies
Instagram claims to hold location data for a maximum of three days, but this timeframe applies only to active sharing, not the underlying location logs the platform maintains for other purposes.
Visibility Scope
Even with location sharing disabled, you can still see others’ shared locations on the map if they’ve enabled the feature. This asymmetric visibility creates potential social pressure to reciprocate sharing.
Red Flags and Warning Signs
Monitor for indicators that suggest your privacy may be compromised.
This Instagram update represents a concerning trend toward ambient surveillance in social media. Companies increasingly normalize continuous data collection by framing it as connectivity enhancement. As consumers, we must recognize that convenience often comes at the cost of privacy.
The feature’s opt-in design provides some protection, but user reports suggest the system may automatically activate for users with older app versions who previously granted location permissions. This highlights the importance of proactive privacy management rather than reactive protection.
Immediate (Next 10 Minutes):
This Week:
Monthly Ongoing:
Grobman advises a comprehensive approach: “The best thing you can do is stay aware and take control. Review your app permissions, think carefully before you share, and use tools that help protect your privacy. McAfee+ includes identity monitoring and scam detection. McAfee’s VPN keeps your IP address private, but if a consumer allows an application to identify its location via GPS or other location services, VPNs will not protect location in that scenario. Staying safe online is always a combination of the best technology along with good digital street smarts.”
Remember: Your location data tells the story of your life—where you work, live, worship, shop, and spend leisure time. Protecting this information isn’t paranoia; it’s fundamental digital hygiene in our hyper-connected world.
The choice to share your location should always remain yours, made with full awareness of the implications. By implementing these protective measures, you’re taking control of your digital footprint and safeguarding your personal security in an increasingly surveilled digital landscape.
The post Instagram’s New Tracking Feature: What You Need to Know to Stay Safe appeared first on McAfee Blog.
The UK’s digital landscape underwent its most significant transformation yet on Friday, July 25, 2025. The Online Safety Act 2023, seven years in the making, is now being fully enforced by Ofcom (the UK’s communications regulator). These new rules fundamentally change how British citizens access and interact with online content, with the primary goal of protecting children from harmful material.
The Online Safety Act is comprehensive legislation designed to make the UK “the safest place in the world to be online.” The law places legal responsibilities on social media companies, search engines, and other online platforms to protect users—especially children—from illegal and harmful content.
The Act applies to virtually any online service that allows user interaction or content sharing, including social media platforms, messaging apps, search engines, gaming platforms, dating apps, and even smaller forums or comment sections.
The journey to the UK Online Safety Act was a long and complex one, beginning with the Government’s 2019 Online Harms White Paper. This initial proposal outlined the need for a new regulatory framework to tackle harmful content. The draft Online Safety Bill was published in May 2021, sparking years of intense debate and scrutiny in Parliament. Public pressure, significantly amplified by tragic events and tireless campaigning from organizations like the Molly Rose Foundation, played a crucial role in shaping the legislation and accelerating its passage. After numerous amendments and consultations with tech companies, civil society groups, and child safety experts, the bill finally received Royal Assent on October 26, 2023, officially becoming the Online Safety Act.
This new UK internet law applies to a vast range of online services accessible within the UK. The core focus is on platforms that host user-generated content (known as user-to-user services) and search engines. Ofcom, the regulator, has established a tiered system to apply the rules proportionally. Category 1 services are the largest and highest-risk platforms like Meta (Facebook, Instagram), X (formerly Twitter), and Google, which face the most stringent requirements. Category 2A covers search services, and Category 2B includes all other in-scope services that don’t meet the Category 1 threshold. This includes smaller social media sites, online forums, and commercial pornographic websites. Notably, services like email, SMS, and content on recognized news publisher websites are exempt from these specific regulations.
Mandatory Age Verification for Adult Content
The most immediate change for consumers is the replacement of simple “Are you 18?” checkboxes with robust age verification. As Oliver Griffiths from Ofcom explained: “The situation at the moment is often ridiculous because people just have to self-declare what their birthday is. That’s no check at all.”
There are three main ways that Brits will now be asked to prove their age.
Platforms must now actively prevent children from accessing content related to suicide, self-harm, eating disorders, pornography, violent or abusive material, online bullying, dangerous challenges or stunts, and hate speech.
Social media platforms and large search engines must keep harmful content off children’s feeds entirely, with algorithms that recommend content required to filter out dangerous material.
Online services must now provide clear and accessible reporting mechanisms for both children and parents, procedures for quickly taking down dangerous content, and identify a named person “accountable for children’s safety” with annual reviews of how they manage risks to children.
Ofcom’s enforcement will follow a proportionality principle, meaning the largest platforms with the highest reach and risk will face the most demanding obligations. Platforms are strongly advised to seek early legal and technical guidance to ensure they meet their specific duties under the new law.
The statistics that drove this legislation are shocking.
According to the Children’s Commissioner, half of 13-year-olds surveyed reported seeing “hardcore, misogynistic” pornographic material on social media sites, with material about suicide, self-harm, and eating disorders described as “prolific.”
Major websites like PornHub, X (formerly Twitter), Reddit, Discord, Bluesky, and Grindr have already committed to following the new rules. Over 6,000 websites hosting adult content have implemented age-assurance measures.
Reddit started checking ages last week for mature content using technology from Persona, which verifies age through uploaded selfies or government ID photos. X has implemented age estimation technology and ID checks, defaulting unverified users into sensitive content settings.
Many consumers worry about the privacy implications of age verification, but the system has built-in protections.
Companies face serious penalties for non-compliance: fines of up to £18 million or 10% of global revenue (whichever is higher). For a company like Meta, this could mean a £16 billion fine.
In extreme cases, senior managers at tech companies face criminal liability and up to two years in jail for repeated breaches. Ofcom can also apply for court orders to block services from being available in the UK.
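The “whichever is higher” rule in that penalty formula only binds at scale: the £18 million floor governs smaller firms, while 10% of revenue dominates for giants. A quick sketch of the arithmetic (the £160 billion revenue figure is an assumption, back-derived from the article’s £16 billion Meta example):

```python
def max_osa_fine(global_revenue_gbp: int) -> int:
    """Online Safety Act cap: £18M or 10% of global revenue, whichever is higher."""
    return max(18_000_000, global_revenue_gbp // 10)

# A smaller platform with £50M global revenue: the £18M floor applies.
assert max_osa_fine(50_000_000) == 18_000_000

# A Meta-scale company (~£160B assumed global revenue): up to £16B.
assert max_osa_fine(160_000_000_000) == 16_000_000_000
```

The crossover sits at £180 million of global revenue; below that, the fixed floor is the larger number.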
Ofcom has already launched probes into 11 companies suspected of breaching parts of the Online Safety Act and expects to announce new investigations into platforms that fail to comply with age check requirements.
While some might consider using VPNs to bypass age verification, Ofcom acknowledges this limitation but emphasizes that most exposure isn’t from children actively seeking harmful content: “Our research shows that these are not people that are out to find porn — it’s being served up to them in their feeds.”
As Griffiths explained: “There will be dedicated teenagers who want to find their way to porn, in the same way as people find ways to buy alcohol under 18. They will use VPNs. And actually, I think there’s a really important reflection here… Parents having a view in terms of whether their kids have got a VPN, and using parental controls and having conversations, feels a really important part of the solution.”
You now have stronger tools and clearer accountability from platforms. Two-thirds of parents already use controls to limit what their children see online, and the new rules provide additional safeguards, though about one in five children can still disable parental controls.
You may experience “some friction” when accessing adult material, but the changes vary by platform. On many services, users will see no obvious difference at all, as only platforms which permit harmful content and lack safeguards are required to introduce checks.
Stricter age controls mean more restricted access to certain content, but platforms must also provide better safety tools and clearer reporting mechanisms.
Industry experts and regulators emphasize that this is “the start of a journey” rather than an overnight fix. As one tech lawyer noted: “I don’t think we’re going to wake up on Friday and children are magically protected… What I’m hoping is that this is the start of a journey towards keeping children safe.”
Ofcom’s approach will be iterative, with ongoing adjustments and improvements. The regulator has indicated it will take swift action against platforms that deliberately flout rules but will work constructively with those genuinely seeking compliance.
The UK Online Safety Act is set to have a profound impact, bringing both significant benefits and notable challenges. For users, the primary benefit is a safer online environment, especially for children who will be better shielded from harmful content. Increased transparency from platforms will also empower users with more information about the risks on services they use. However, some users have raised concerns about data privacy related to age verification and the potential for the Act to stifle free expression and lead to over-removal of legitimate content.
For the tech industry, the law presents major operational hurdles. Compliance will require substantial investment in technology, content moderation, and legal expertise, with costs potentially running into the billions across the sector. Smaller platforms may struggle to meet the requirements, potentially hindering innovation and competition. The key takeaway is that the Online Safety Act marks a paradigm shift, moving from self-regulation to a legally enforceable duty of care, the full effects of which will unfold over the coming years as Ofcom’s enforcement ramps up.
Some campaigners argue the measures don’t go far enough, with the Molly Rose Foundation calling for additional changes and some MPs wanting under-16s banned from social media completely. Privacy advocates worry about invasive verification methods, while others question effectiveness.
Parliament’s Science, Innovation and Technology Committee has criticized the act for containing “major holes,” particularly around misinformation and AI-generated content. Technology Secretary Peter Kyle has promised to “shortly” announce additional measures to reduce children’s screen time.
This week’s implementation represents “the most significant milestone yet” in the UK’s bid to become the safest place online. While the changes may not be immediately visible to all users, they establish crucial foundations for ongoing child safety improvements.
The Online Safety Act is designed to be a living framework that evolves with technology and emerging threats. Expect continued refinements, additional measures, and stronger enforcement as the system matures.
The Online Safety Act represents a fundamental shift in how online platforms operate in the UK. While it may introduce some inconvenience through age verification processes, the legislation prioritizes protecting children from genuine harm.
The success of these measures will depend on consistent enforcement, platform cooperation, and ongoing parental engagement. As one Ofcom official noted: “I think people accept that we’re not able to snap our fingers and do everything immediately when we are facing really deep-seated problems that have built up over 20 years. But what we are going to be seeing is really big progress.”
Stay informed about these changes, understand your verification options, and remember that these new safeguards are designed to protect the most vulnerable internet users while preserving legitimate access for adults.
The post UK’s New Online Safety Act: What Consumers Need to Know appeared first on McAfee Blog.
As reports emerge of a new TikTok app known internally as “M2” specifically designed for US users, McAfee warns that the transition period could create perfect conditions for cybercriminals to exploit unsuspecting consumers – including by distributing fake or malicious TikTok apps disguised as the real thing. Here’s what you need to know about the potential risks and how to stay protected.
According to The Information, TikTok is building a new version of the app just for the United States that could launch as soon as September 5. This development comes as ByteDance faces pressure to sell TikTok’s US operations or face a ban under federal legislation. The existing TikTok app will reportedly be removed from US app stores on the same day the new US app launches, although Americans may be able to continue using the current app until March of next year.
The transition won’t be seamless. Transferring the profiles and content of current users to the new app could pose practical challenges, and such a move could also make it harder for American TikTok users to see content from users in other countries. This disruption period presents significant cybersecurity risks that users must be aware of.
ByteDance has been on the clock to find a new owner for TikTok’s US operations since then-President Joe Biden signed the sale-or-ban law last year over national security concerns. The Chinese government has indicated it would block any transfer of TikTok’s algorithm, meaning any new, separate American TikTok would need its own algorithm, possibly built from the ground up. President Trump has stated there are wealthy buyers ready to purchase TikTok’s US operations, though ByteDance currently has until September 17 to sell the app or face a US ban.
The announcement of a new TikTok app creates a perfect storm for cybercriminals looking to exploit confused users during the transition period. Based on McAfee’s recent research into Android malware campaigns, we can expect to see a surge in fake TikTok apps appearing across various distribution channels.
Drawing from our analysis of current malware trends, cybercriminals will likely leverage several tactics:
1. Timing Confusion: During the transition period when users are uncertain about which app is legitimate, scammers will capitalize on this confusion by distributing fake “new TikTok” apps through unofficial channels and app stores.
2. Sophisticated Impersonation: Cybercriminals are getting smarter, using development toolkits like .NET MAUI to create fake apps that look and feel like the real thing. Expect to see convincing fake TikTok apps that mirror the official design and functionality.
3. Advanced Evasion Techniques: These fake apps hide their code in binary files so it can’t be easily detected, letting them stay on your phone longer—stealing quietly in the background. The new TikTok transition provides perfect cover for such sophisticated malware.
These apps aren’t in the Google Play Store. Instead, hackers will likely share them on fake websites, messaging apps, and sketchy links in texts or chat groups. During the TikTok transition, be especially wary of downloads arriving through any of those channels.
Based on recent malware campaigns we’ve analyzed, fake TikTok apps could potentially behave the same way, quietly stealing data in the background.
To stay safe during this vulnerable period, stick to the essentials: download only from official app stores, and verify the publisher before installing anything.
Hackers are getting creative, but you can stay one step ahead. These recent .NET MAUI-based threats are sneaky—but they’re not unstoppable. The key is maintaining vigilance and using comprehensive security tools that evolve with the threat landscape.
As we navigate the transition to a new TikTok app for US users, remember that cybercriminals will attempt to exploit every opportunity for confusion and uncertainty. By staying informed, using official download sources, and leveraging tools like McAfee’s Mobile Security, you can continue enjoying social media safely.
The digital landscape is constantly evolving, but with the right knowledge and tools, you can stay protected while enjoying the platforms you love. Whether you’re transitioning to a new TikTok app or simply want better control over your social media privacy, McAfee+ provides the comprehensive protection you need in today’s connected world.
The post New TikTok App on the Horizon: What US Users Need to Know About the Risks appeared first on McAfee Blog.
On Sunday, July 20, Microsoft Corp. issued an emergency security update for a vulnerability in SharePoint Server that is actively being exploited to compromise vulnerable organizations. The patch comes amid reports that malicious hackers have used the SharePoint flaw to breach U.S. federal and state agencies, universities, and energy companies.
Image: Shutterstock, by Ascannio.
In an advisory about the SharePoint security hole, a.k.a. CVE-2025-53770, Microsoft said it is aware of active attacks targeting on-premises SharePoint Server customers and exploiting vulnerabilities that were only partially addressed by the July 8, 2025 security update.
The Cybersecurity & Infrastructure Security Agency (CISA) concurred, saying CVE-2025-53770 is a variant on a flaw Microsoft patched earlier this month (CVE-2025-49706). Microsoft notes the weakness applies only to SharePoint Servers that organizations use in-house, and that SharePoint Online and Microsoft 365 are not affected.
The Washington Post reported on Sunday that the U.S. government and partners in Canada and Australia are investigating the hack of SharePoint servers, which provide a platform for sharing and managing documents. The Post reports at least two U.S. federal agencies have seen their servers breached via the SharePoint vulnerability.
According to CISA, attackers exploiting the newly-discovered flaw are retrofitting compromised servers with a backdoor dubbed “ToolShell” that provides unauthenticated, remote access to systems. CISA said ToolShell enables attackers to fully access SharePoint content — including file systems and internal configurations — and execute code over the network.
Researchers at Eye Security said they first spotted large-scale exploitation of the SharePoint flaw on July 18, 2025, and soon found dozens of separate servers compromised by the bug and infected with ToolShell. In a blog post, the researchers said the attacks sought to steal SharePoint server ASP.NET machine keys.
“These keys can be used to facilitate further attacks, even at a later date,” Eye Security warned. “It is critical that affected servers rotate SharePoint server ASP.NET machine keys and restart IIS on all SharePoint servers. Patching alone is not enough. We strongly advise defenders not to wait for a vendor fix before taking action. This threat is already operational and spreading rapidly.”
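Eye Security’s “patching alone is not enough” warning follows from what machine keys do: ASP.NET trusts server-issued blobs such as __VIEWSTATE because they carry a MAC computed with the machine key, so anyone holding a stolen key can mint payloads the server will keep accepting until the key is rotated. A simplified sketch of the idea (an illustrative HMAC scheme only, not ASP.NET’s real serialization or key format):

```python
import hashlib
import hmac

def sign(payload: bytes, machine_key: bytes) -> bytes:
    # Append a MAC over the payload, loosely mimicking ViewState protection.
    return payload + hmac.new(machine_key, payload, hashlib.sha256).digest()

def verify(blob: bytes, machine_key: bytes) -> bool:
    # Split off the 32-byte SHA-256 MAC and check it in constant time.
    payload, mac = blob[:-32], blob[-32:]
    expected = hmac.new(machine_key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

stolen_key = b"leaked-machine-key"
forged = sign(b"attacker payload", stolen_key)   # forgeable once the key leaks

assert verify(forged, stolen_key)                # a server still using the old key accepts it
assert not verify(forged, b"rotated-new-key")    # rotating the key invalidates the forgery
```

This is why CISA and Eye Security stress rotating keys and restarting IIS in addition to patching: the patch closes the door, but rotation is what revokes any keys already stolen through it.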
Microsoft’s advisory says the company has issued updates for SharePoint Server Subscription Edition and SharePoint Server 2019, but that it is still working on updates for supported versions of SharePoint Server 2016.
CISA advises vulnerable organizations to enable the anti-malware scan interface (AMSI) in SharePoint, to deploy Microsoft Defender AV on all SharePoint servers, and to disconnect affected products from the public-facing Internet until an official patch is available.
The security firm Rapid7 notes that Microsoft has described CVE-2025-53770 as related to a previous vulnerability — CVE-2025-49704, patched earlier this month — and that CVE-2025-49704 was part of an exploit chain demonstrated at the Pwn2Own hacking competition in May 2025. That exploit chain invoked a second SharePoint weakness — CVE-2025-49706 — which Microsoft unsuccessfully tried to fix in this month’s Patch Tuesday.
Microsoft also has issued a patch for a related SharePoint vulnerability — CVE-2025-53771; Microsoft says there are no signs of active attacks on CVE-2025-53771, and that the patch is to provide more robust protections than the update for CVE-2025-49706.
This is a rapidly developing story. Any updates will be noted with timestamps.
If someone called you claiming to be a government official, would you know if their voice was real? This question became frighteningly relevant this week when a cybercriminal used social engineering and AI to impersonate Secretary of State Marco Rubio, fooling high-level officials with fake voice messages that sounded exactly like him. It raises a critical concern: would other world leaders be able to tell the difference, or would they fall for it too?
In June 2025, an unknown attacker created a fake Signal account using the display name “Marco.Rubio@state.gov” and began contacting government officials with AI-generated voice messages that perfectly mimicked the Secretary of State’s voice and writing style. The imposter successfully reached at least five high-profile targets, including three foreign ministers, a U.S. governor, and a member of Congress.
The attack wasn’t just about pranks or publicity. U.S. authorities believe the culprit was “attempting to manipulate powerful government officials with the goal of gaining access to information or accounts.” This represents a sophisticated social engineering attack that could have serious national and international security implications.
The Rubio incident isn’t isolated. In May, someone breached the phone of White House Chief of Staff Susie Wiles and began placing calls and messages to senators, governors and business executives while pretending to be Wiles. These attacks are becoming more common because:
While the Rubio case involved government officials, these same techniques are being used against everyday Americans. A recent McAfee study found that 59% of Americans say they or someone they know has fallen for an online scam in the last 12 months, with scam victims losing an average of $1,471. In 2024, our research revealed that 1 in 3 people believe they have experienced some kind of AI voice scam.
Some of the most devastating are “grandparent scams” where criminals clone a grandchild’s voice to trick elderly relatives into sending money for fake emergencies. Deepfake scam victims have reported losses ranging from $250 to over half a million dollars.
Common AI voice scam scenarios range from fake family emergencies to impersonations of government officials and business executives.
One big reason deepfake scams are exploding? The tools are cheap, powerful, and incredibly easy to use. McAfee Labs tested 17 deepfake generators and found many are available online for free or with low-cost trials. Some are marketed as “entertainment” — made for prank calls or spoofing celebrity voices on apps like WhatsApp. But others are clearly built with scams in mind, offering realistic impersonations with just a few clicks.
Not long ago, creating a convincing deepfake took experts days or even weeks. Now? It can cost less than a latte and take less time to make than it takes to drink one. Simple drag-and-drop interfaces mean anyone — even with zero technical skills — can clone voices or faces.
Even more concerning: open-source libraries provide free tutorials and pre-trained models, helping scammers skip the hard parts entirely. While some of the more advanced tools require a powerful computer and graphics card, a decent setup costs under $1,000, a tiny price tag when you consider the payoff.
Globally, 87% of scam victims lose money, and 1 in 5 lose over $1,000. Just a handful of successful scams can easily pay for a scammer’s gear and then some. In one McAfee test, for just $5 and 10 minutes of setup time, we created a real-time avatar that made us look and sound like Tom Cruise. Yes, it’s that easy — and that dangerous.
Figure 1. Demonstrating the creation of a highly convincing deepfake
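The economics sketched above can be made concrete with a back-of-envelope check (using the article’s sub-$1,000 rig figure and the $1,471 average-loss figure from McAfee’s survey cited earlier; both are rough):

```python
rig_cost = 1_000   # a decent deepfake setup, per the article
avg_loss = 1_471   # average loss per scam victim (McAfee survey)

# Ceiling division: how many average-sized scams cover the hardware cost?
scams_to_break_even = -(-rig_cost // avg_loss)

assert scams_to_break_even == 1  # one average scam more than pays for the rig
```

With a single average payout exceeding the full cost of the gear, the barrier to entry is effectively gone.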
Recognizing the urgent need for protection, McAfee developed Deepfake Detector to fight AI-powered scams, and it represents one of the most advanced consumer tools available today.
While McAfee’s Deepfake Detector is built to identify manipulated audio within videos, it points to the kind of technology that’s becoming essential in situations like this. If the impersonation attempt had taken the form of a video message posted or shared online, Deepfake Detector could have flagged the manipulated audio and alerted viewers that the voice was likely AI-generated.
Our technology uses advanced AI detection techniques — including transformer-based deep neural networks — to help consumers discern what’s real from what’s fake in today’s era of AI-driven deception.
While the consumer-facing version of our technology doesn’t currently scan audio-only content like phone calls or voice messages, the Rubio case shows why AI detection tools like ours are more critical than ever — especially as threats evolve across video, audio, and beyond — and why it’s crucial for the cybersecurity industry to continue evolving at the speed of AI.
While technology like McAfee’s Deepfake Detector provides powerful protection, it works best alongside good habits: verify unexpected requests through a separate, known channel, and treat urgent voice messages with healthy skepticism.
The Rubio incident shows that no one is immune to AI voice scams. It also demonstrates why proactive detection technology is becoming essential. Knowledge is power, and this has never been truer than in today’s AI-driven world.
The race between AI-powered scams and AI-powered protection is intensifying. By staying informed, using advanced detection tools, and maintaining healthy skepticism, we can stay one step ahead of cybercriminals who are trying to literally steal our voices and our trust.
The post When AI Voices Target World Leaders: The Growing Threat of AI Voice Scams appeared first on McAfee Blog.
Microsoft today released updates to fix at least 137 security vulnerabilities in its Windows operating systems and supported software. None of the weaknesses addressed this month are known to be actively exploited, but 14 of the flaws earned Microsoft’s most-dire “critical” rating, meaning they could be exploited to seize control over vulnerable Windows PCs with little or no help from users.
While not listed as critical, CVE-2025-49719 is a publicly disclosed information disclosure vulnerability, with all versions as far back as SQL Server 2016 receiving patches. Microsoft rates CVE-2025-49719 as less likely to be exploited, but the availability of proof-of-concept code for this flaw means its patch should probably be a priority for affected enterprises.
Mike Walters, co-founder of Action1, said CVE-2025-49719 can be exploited without authentication, and that many third-party applications depend on SQL server and the affected drivers — potentially introducing a supply-chain risk that extends beyond direct SQL Server users.
“The potential exposure of sensitive information makes this a high-priority concern for organizations handling valuable or regulated data,” Walters said. “The comprehensive nature of the affected versions, spanning multiple SQL Server releases from 2016 through 2022, indicates a fundamental issue in how SQL Server handles memory management and input validation.”
Adam Barnett at Rapid7 notes that today is the end of the road for SQL Server 2012, meaning there will be no future security patches even for critical vulnerabilities, even if you’re willing to pay Microsoft for the privilege.
Barnett also called attention to CVE-2025-47981, a remote code execution bug with a CVSS score of 9.8 (10 being the worst) in the way Windows servers and clients negotiate to discover mutually supported authentication mechanisms. This pre-authentication vulnerability affects any Windows client machine running Windows 10 1607 or above, and all current versions of Windows Server. Microsoft considers it more likely that attackers will exploit this flaw.
Microsoft also patched at least four critical, remote code execution flaws in Office (CVE-2025-49695, CVE-2025-49696, CVE-2025-49697, CVE-2025-49702). The first two are both rated by Microsoft as having a higher likelihood of exploitation, do not require user interaction, and can be triggered through the Preview Pane.
Two more high severity bugs include CVE-2025-49740 (CVSS 8.8) and CVE-2025-47178 (CVSS 8.0); the former is a weakness that could allow malicious files to bypass screening by Microsoft Defender SmartScreen, a built-in feature of Windows that tries to block untrusted downloads and malicious sites.
CVE-2025-47178 involves a remote code execution flaw in Microsoft Configuration Manager, an enterprise tool for managing, deploying, and securing computers, servers, and devices across a network. Ben Hopkins at Immersive said this bug requires very low privileges to exploit, and that it is possible for a user or attacker with a read-only access role to exploit it.
“Exploiting this vulnerability allows an attacker to execute arbitrary SQL queries as the privileged SMS service account in Microsoft Configuration Manager,” Hopkins said. “This access can be used to manipulate deployments, push malicious software or scripts to all managed devices, alter configurations, steal sensitive data, and potentially escalate to full operating system code execution across the enterprise, giving the attacker broad control over the entire IT environment.”
Separately, Adobe has released security updates for a broad range of software, including After Effects, Adobe Audition, Illustrator, FrameMaker, and ColdFusion.
The SANS Internet Storm Center has a breakdown of each individual patch, indexed by severity. If you’re responsible for administering a number of Windows systems, it may be worth keeping an eye on AskWoody for the lowdown on any potentially wonky updates (considering the large number of vulnerabilities and Windows components addressed this month).
If you’re a Windows home user, please consider backing up your data and/or drive before installing any patches, and drop a note in the comments if you encounter any problems with these updates.
Summer festival season is upon us, and music lovers are eagerly anticipating everything from The Weeknd tickets to intimate local music festivals. But while you’re dreaming of unforgettable performances, scammers are plotting to turn your concert and festival excitement into their profitable payday. The sobering reality? UK gig-goers lost over £1.6 million to ticket fraud in 2024, more than double the previous year’s losses. With approximately 3,700 gig ticket fraud reports made to Action Fraud in 2024, and almost half originating from social media platforms, the threat to festival-goers has never been greater.

A Lloyds Bank analysis of scam reports from its customers has revealed that Oasis Live ’25 tickets are a top target for fraudsters. In the first month following the reunion tour announcement, these fake ticket scams made up roughly 70% of all reported concert ticket fraud cases since August 27, 2024. According to Lloyds, the average victim lost £436 ($590), with some reporting losses as high as £1,000 ($1,303).
Concert tickets have become the ultimate playground for cybercriminals, and it’s easy to see why. The perfect storm of high demand, limited supply, and emotional urgency creates ideal conditions for fraud. When your favorite artist announces a tour, tickets often sell out in minutes, leaving desperate fans scrambling on secondary markets where scammers thrive. Unlike typical retail purchases, concert tickets are intangible digital products that are difficult to verify until you’re standing at the venue gate, often too late to get your money back. Scammers exploit this by creating fake ticketing websites with legitimate-sounding names, posting counterfeit tickets on social media marketplaces, and even setting up fraudulent “last-minute deals” outside venues.
The emotional investment fans have in seeing their favorite performers makes them more likely to ignore red flags like unusual payment methods, prices that seem too good to be true, or sellers who refuse to use secure payment platforms. Add in the time pressure of limited availability, and scammers have found the perfect recipe for separating music lovers from their money. With the average concert scam victim losing over $400 according to the Better Business Bureau, what should be an exciting musical experience often becomes a costly lesson in digital fraud.
How It Works: Scammers create convincing counterfeit tickets using stolen designs, logos, and QR codes from legitimate events. They may purchase one real ticket and then sell multiple copies to different buyers, knowing only the first person through the gate will succeed.
The Digital Danger: With the rise of digital tickets and QR codes, scammers can easily screenshot, photograph, or forward ticket confirmations to multiple victims. Since many festival-goers don’t realize that a ticket’s QR code is typically valid for only a single scan, multiple people may believe they own the same valid ticket.
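The single-scan property is easy to see with a minimal sketch (the class and ticket codes below are hypothetical examples, not any real venue’s system): the gate scanner records each code as redeemed on first use, so every later copy of the same QR code is turned away.

```python
# Minimal sketch of single-use ticket validation (hypothetical example;
# real venue scanning systems are far more complex).

class TicketScanner:
    def __init__(self, valid_codes):
        self.valid_codes = set(valid_codes)  # codes issued by the ticketer
        self.redeemed = set()                # codes already used at the gate

    def scan(self, code):
        if code not in self.valid_codes:
            return "REJECTED: unknown ticket"
        if code in self.redeemed:
            return "REJECTED: already scanned"
        self.redeemed.add(code)             # first scan marks the code as used
        return "ADMITTED"

scanner = TicketScanner(["TKT-0001"])
print(scanner.scan("TKT-0001"))  # first holder: ADMITTED
print(scanner.scan("TKT-0001"))  # any duplicate copy: REJECTED: already scanned
```

This is why a scammer can sell the same screenshot to ten buyers: only whoever reaches the gate first gets in.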
How It Works: Fraudsters create entirely fictional festivals (remember the Fyre Festival?), complete with fake lineups featuring popular artists, professional websites, and aggressive marketing campaigns. They invest heavily in making these events appear legitimate, sometimes even securing fake venues and promotional partnerships.
The Impersonator: Some scammers specifically target popular festivals by creating fake events with slight name variations or claiming to offer exclusive “VIP experiences” that don’t exist.
How It Works: Scammers create fake profiles or hack legitimate accounts to advertise sold-out festival tickets. They often target popular festival hashtags and engage with desperate fans seeking last-minute tickets on TikTok, Instagram, and Facebook Marketplace.
The FOMO Factor: These scammers exploit the fear of missing out by creating false urgency: “Only 2 tickets left!” or “Someone just backed out, quick sale needed!”
How It Works: Legitimate-seeming sellers request payment through untraceable methods like bank transfers, gift cards, or cryptocurrency. Once payment is sent, the “seller” disappears, leaving victims with no recourse for recovery.
How It Works: Fraudsters create fake QR codes that lead to malicious websites designed to steal your personal information or payment details. These might be disguised as “ticket verification” sites or fake festival apps.
The Modern Twist: Some scammers send QR codes claiming they contain your tickets, but scanning them actually downloads malware or leads to phishing sites designed to harvest your personal information.
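One practical habit: decode the QR code and read the URL before opening it. Here is a minimal sketch of that check (the allow-listed domains are hypothetical placeholders; substitute the official domains of the ticketer you actually bought from):

```python
from urllib.parse import urlparse

# Hypothetical allow-list for illustration only; replace with the real
# official domains of your ticketer, venue, or festival.
TRUSTED_DOMAINS = {"ticketmaster.com", "axs.com"}

def looks_trustworthy(url):
    """Return True only if the link uses HTTPS and its hostname is a
    trusted domain (or a subdomain of one)."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    host = (parsed.hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(looks_trustworthy("https://www.ticketmaster.com/event/123"))
# True: HTTPS and a subdomain of an allow-listed domain

print(looks_trustworthy("http://ticketmaster.com.verify-now.example"))
# False: not HTTPS, and the real domain here is verify-now.example
```

The second example shows a common trick: the trusted brand appears at the start of the hostname, but the actual registered domain is the attacker’s, which is why checking the full hostname suffix matters.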
McAfee’s Scam Detector is your shield against concert and ticket scams this summer. This advanced scam detection technology is built to spot and stop scams across text messages, emails, and videos. Here’s how Scam Detector protects concert-goers:
Scam Detector catches suspicious messages across apps like iMessage, WhatsApp, and Facebook Messenger—exactly where ticket scammers often strike.
Flags phishing emails that appear to be from venues, ticketing companies, or resale platforms across Gmail, Outlook, and Yahoo. The system alerts you and explains why an email was flagged, helping you learn to spot concert scams as you go.
Detects AI-generated or manipulated audio in videos on platforms like YouTube, TikTok, and Facebook—perfect for catching fake artist endorsements or fraudulent venue announcements that scammers use to promote fake ticket sales.
Found a great ticket deal but feeling uncertain? Upload a screenshot, message, or link for instant analysis. Scam Detector offers context so you understand exactly why a ticket offer might be fraudulent.
Choose the level of protection that works for your concert-going habits:
If you do click a suspicious ticket link, McAfee’s Scam Detector can help block dangerous sites before they load, protecting you from fake ticketing websites.
McAfee’s Scam Detector delivers reliable protection against the most common ticket scam tactics without false alarms that might block legitimate communications from venues or artists. Scam Detector uses on-device AI wherever possible, meaning your concert ticket searches and purchase communications aren’t sent to the cloud for analysis. Your excitement about seeing your favorite band stays between you and your devices.
Make This Summer About Music, Not Scams. Don’t let fraudsters steal your summer concert experience. With McAfee’s Scam Detector, you can focus on what really matters: getting legitimate tickets to see amazing live music. The technology works in the background, identifying scams and educating you along the way, so you can make confident decisions about your concert purchases. Summer festivals, arena shows, and outdoor concerts are waiting—make sure you’re protected while you’re getting ready to rock.
Learn more about McAfee’s Scam Detector at: https://www.mcafee.com/en-us/scam-detector.
The post How to Protect Yourself from Concert and Festival Ticket Scams appeared first on McAfee Blog.