Infosec In Brief A critical vulnerability in the on-prem version of Trend Micro's Apex One endpoint security platform is under active exploitation, the company admitted last week, and there's no patch available.…
DEF CON A DEF CON hacker walks into a small-town water facility…no, this is not the setup for a joke or a (super-geeky) odd-couple rom-com. It's a true story that happened at five utilities across four states.…
DEF CON On Saturday at DEF CON, security boffin Micah Lee explained just how he published data from TeleMessage, the supposedly secure messaging app used by White House officials, which in turn led to a massive database dump of their communications.…
Defining good SLAs is a tough challenge, but it’s at the heart of any solid vulnerability management program. This article helps internal security teams set clear SLAs, define the right metrics, and adjust their ticketing system to build a successful vulnerability management program.
Meta has unleashed a groundbreaking feature that transforms Instagram from a photo-sharing platform into a real-time location broadcaster. While the company promises enhanced connectivity, cybersecurity experts are sounding alarm bells about potential dangers lurking beneath this seemingly innocent update.
Instagram’s freshly minted “Map” functionality represents a seismic shift in social media architecture. Unlike traditional posting where you deliberately choose what to share, this feature operates as an always-on location transmitter that continuously broadcasts your whereabouts to selected contacts whenever you launch the application.
The mechanism mirrors Snapchat’s infamous Snap Map, but with Instagram’s massive user base of more than 2 billion active accounts, the stakes for personal security are far higher. The feature lets users share their real-time location with friends and view theirs on a live map, but it also raises serious privacy concerns, ranging from targeted advertising to potential stalking and misuse in abusive relationships.
McAfee’s Chief Technology Officer Steve Grobman provides crucial context: “Features like location sharing aren’t inherently bad, but they come with tradeoffs. It’s about making informed choices. When people don’t fully understand what’s being shared or who can see it, that’s when it becomes a risk.”
Digital predators can exploit location data to track victims with unprecedented precision. Relationship and parenting experts warn location sharing can turn into a stressful or even dangerous form of control, with research showing that 19 percent of 18 to 24-year-olds think it’s reasonable to expect to track an intimate partner’s location.
Steve Grobman emphasizes the real-world implications: “There’s also a real-world safety concern. If someone knows where you are in real time, that could lead to stalking, harassment, or even assault. Location data can be powerful, and in the wrong hands, dangerous.”
Your boss, colleagues, or acquaintances might gain unwanted insights into your personal activities. Imagine explaining why you visited a competitor’s office or why you called in sick while appearing at a shopping center.
The danger often comes from within your own network. Grobman warns: “It only takes one person with bad intentions for location sharing to become a serious problem. You may think your network is made up of friends, but in many cases, people accept requests from strangers or someone impersonating a contact without really thinking about the consequences.”
While Instagram claims it doesn’t use location data from this feature for ad targeting, the platform’s history with user data suggests caution. Your movement patterns create valuable behavioral profiles for marketers.
Cybercriminals employ sophisticated data aggregation techniques. According to Grobman: “Criminals can use what’s known as the mosaic effect, combining small bits of data like your location, routines, and social posts to build a detailed profile. They can use that information to run scams against a consumer or their connections, guess security questions, or even commit identity theft.”
For iPhone Users:
For Android Users:
Method 1: Through the Map Interface
Method 2: Through Profile Settings
iPhone Security Configuration:
Android Security Setup:
After implementing these changes:
Audit Your Digital Footprint
Review all social media platforms for similar location-sharing features. Snapchat, Facebook, and TikTok offer comparable functionalities that require individual deactivation.
Implement Location Spoofing Awareness
Some users consider VPN services or location-spoofing applications, but these methods can violate platform terms of service and create additional security vulnerabilities.
Regular Security Hygiene
Establish monthly reviews of your privacy settings across all social platforms. Companies frequently update features and reset user preferences without explicit notification.
Grobman emphasizes the challenge consumers face: “Most social platforms offer privacy settings that offer fine-grained control, but the reality is many people don’t know those settings exist or don’t take the time to use them. That can lead to oversharing, especially when it comes to things like your location.”
Family Protection Protocols
If you’re a parent with supervision set up for your teen, you can control their location sharing experience on the map, get notified when they enable it, and see who they’re sharing with. Implement these controls immediately for underage family members.
Data Collection Frequency
Your location updates whenever you open the app or return to it while running in the background. This means Instagram potentially logs your position multiple times daily, creating detailed movement profiles.
Data Retention Policies
Instagram claims to hold location data for a maximum of three days, but this timeframe applies only to active sharing, not the underlying location logs the platform maintains for other purposes.
Visibility Scope
Even with location sharing disabled, you can still see others’ shared locations on the map if they’ve enabled the feature. This asymmetric visibility creates potential social pressure to reciprocate sharing.
Red Flags and Warning Signs
Monitor these indicators that suggest your privacy may be compromised:
This Instagram update represents a concerning trend toward ambient surveillance in social media. Companies increasingly normalize continuous data collection by framing it as connectivity enhancement. As consumers, we must recognize that convenience often comes at the cost of privacy.
The feature’s opt-in design provides some protection, but user reports suggest the system may automatically activate for users with older app versions who previously granted location permissions. This highlights the importance of proactive privacy management rather than reactive protection.
Immediate (Next 10 Minutes):
This Week:
Monthly Ongoing:
Grobman advises a comprehensive approach: “The best thing you can do is stay aware and take control. Review your app permissions, think carefully before you share, and use tools that help protect your privacy. McAfee+ includes identity monitoring and scam detection. McAfee’s VPN keeps your IP address private, but if a consumer allows an application to identify their location via GPS or other location services, a VPN will not protect location in that scenario. Staying safe online is always a combination of the best technology along with good digital street smarts.”
Remember: Your location data tells the story of your life—where you work, live, worship, shop, and spend leisure time. Protecting this information isn’t paranoia; it’s fundamental digital hygiene in our hyper-connected world.
The choice to share your location should always remain yours, made with full awareness of the implications. By implementing these protective measures, you’re taking control of your digital footprint and safeguarding your personal security in an increasingly surveilled digital landscape.
The post Instagram’s New Tracking Feature: What You Need to Know to Stay Safe appeared first on McAfee Blog.
A new documentary series about cybercrime airing next month on HBO Max features interviews with Yours Truly. The four-part series follows the exploits of Julius Kivimäki, a prolific Finnish hacker recently convicted of leaking tens of thousands of patient records from an online psychotherapy practice while attempting to extort the clinic and its patients.
The documentary, “Most Wanted: Teen Hacker,” explores the 27-year-old Kivimäki’s lengthy and increasingly destructive career, one that was marked by cyber attacks designed to result in real-world physical impacts on their targets.
By the age of 14, Kivimäki had fallen in with a group of criminal hackers who were mass-compromising websites and milking them for customer payment card data. Kivimäki and his friends enjoyed harassing and terrorizing others by “swatting” their homes — calling in fake hostage situations or bomb threats at a target’s address in the hopes of triggering a heavily-armed police response to that location.
On Dec. 26, 2014, Kivimäki and fellow members of a group of online hooligans calling themselves the Lizard Squad launched a massive distributed denial-of-service (DDoS) attack against the Sony Playstation and Microsoft Xbox Live platforms, preventing millions of users from playing with their shiny new gaming rigs the day after Christmas. The Lizard Squad later acknowledged that the stunt was planned to call attention to their new DDoS-for-hire service, which came online and started selling subscriptions shortly after the attack.
Finnish investigators said Kivimäki also was responsible for a 2014 bomb threat against former Sony Online Entertainment President John Smedley that grounded an American Airlines plane. That incident was widely reported to have started with a Twitter post from the Lizard Squad, after Smedley mentioned some upcoming travel plans online. But according to Smedley and Finnish investigators, the bomb threat started with a phone call from Kivimäki.
Julius “Zeekill” Kivimaki, in December 2014.
The creaky wheels of justice seemed to be catching up with Kivimäki in mid-2015, when a Finnish court found him guilty of more than 50,000 cybercrimes, including data breaches, payment fraud, and operating a global botnet of hacked computers. Unfortunately, the defendant was 17 at the time, and received little more than a slap on the wrist: A two-year suspended sentence and a small fine.
Kivimäki immediately bragged online about the lenient sentencing, posting on Twitter that he was an “untouchable hacker god.” I wrote a column in 2015 lamenting his laughable punishment because it was clear even then that this was a person who enjoyed watching other people suffer, and who seemed utterly incapable of remorse about any of it. It was also abundantly clear to everyone who investigated his crimes that he wasn’t going to quit unless someone made him stop.
In response to some of my early reporting that mentioned Kivimäki, one reader shared that she had been dealing with non-stop harassment and abuse from Kivimäki for years, including swatting incidents, unwanted deliveries and subscriptions, emails to her friends and co-workers, as well as threatening phone calls and texts at all hours of the night. The reader, who spoke on condition of anonymity, shared that Kivimäki at one point confided that he had no reason whatsoever for harassing her — that she was picked at random and that it was just something he did for laughs.
Five years after Kivimäki’s conviction, the Vastaamo Psychotherapy Center in Finland became the target of blackmail when a tormentor identified as “ransom_man” demanded payment of 40 bitcoins (~450,000 euros at the time) in return for a promise not to publish highly sensitive therapy session notes Vastaamo had exposed online.
Ransom_man, a.k.a. Kivimäki, announced on the dark web that he would start publishing 100 patient profiles every 24 hours. When Vastaamo declined to pay, ransom_man shifted to extorting individual patients. According to Finnish police, some 22,000 victims reported extortion attempts targeting them personally: targeted emails that threatened to publish their therapy notes online unless the recipient paid a 500 euro ransom.
In October 2022, Finnish authorities charged Kivimäki with extorting Vastaamo and its patients. But by that time he was on the run from the law and living it up across Europe, spending lavishly on fancy cars, apartments and a hard-partying lifestyle.
In February 2023, Kivimäki was arrested in France after authorities there responded to a domestic disturbance call and found the defendant sleeping off a hangover on the couch of a woman he’d met the night before. The French police grew suspicious when the 6′ 3″ blonde, green-eyed man presented an ID that stated he was of Romanian nationality.
A redacted copy of an ID Kivimaki gave to French authorities claiming he was from Romania.
In April 2024, Kivimäki was sentenced to more than six years in prison after being convicted of extorting Vastaamo and its patients.
The documentary is directed by the award-winning Finnish producer and director Sami Kieski and co-written by Joni Soila. According to an August 6 press release, the four 43-minute episodes will drop weekly on Fridays throughout September across Europe, the U.S., Latin America, Australia and South-East Asia.
DEF CON A cache of documents uncovered by Vanderbilt University has revealed disturbing details about how a Chinese company is building up a database of US politicians and influencers with whom to share propaganda.…
I came across a broken link hijacking case involving a Google Play Store package. The app link returns a 404, and the package name is currently unclaimed, which means it can potentially be taken over. It’s a valid security issue and could be eligible for a bug bounty, though I’m not 100% sure.
The company asked for a working proof of concept, meaning the package has to actually be claimed and uploaded to the Play Store. I haven’t created a developer account myself yet, since I haven’t needed one except for this case and it requires a $25 fee.
If you already have a developer account, would you be willing to contribute by uploading a simple placeholder app using that package name, just to prove the takeover? If the report gets rewarded, I’ll share 10% of the bounty with you. Usually, these types of reports are rewarded with $50 or $100, so I hope you understand I can’t offer more than 10%.
Let me know if you’re open to it.
Thanks!
As Trixie gets ready to début, a little-known app is hogging the limelight: StarDict, which sends whatever text you select, unencrypted, to servers in China.…
Here’s a prediction that might ruffle some feathers: The engineers who struggle most in the AI revolution won’t be those who can’t adapt to new frameworks or learn new languages. It’ll be those who can’t master the art of contextualization.
I’m talking about engineers with lower emotional intelligence — brilliant problem-solvers who know exactly what to do and how to do it, but struggle with the subtleties of knowledge transfer. They can debug complex systems and architect elegant solutions, but ask them to explain their reasoning, prioritize information, or communicate nuanced requirements? That’s where things get messy.
In the pre-AI world, this was manageable. Code was the primary interface. Documentation was optional. Communication happened in pull requests and stack overflow posts. But AI has fundamentally changed the game.
Context engineering is the practice of providing AI systems with the precise “mental material” they need to achieve goals effectively. It’s not just prompt writing or RAG implementation — it’s cognitive architecture. When you hire a new team member, you don’t just hand them a task and walk away. You provide context. You explain the company culture, the project history, the constraints, the edge cases, and the unspoken rules. You share your mental model of the problem space. Context engineering is doing exactly this, but for AI systems.
This shift reveals something interesting: Engineers with lower emotional intelligence often excel at technical execution but struggle with the nuanced aspects of knowledge transfer — deciding what information to share versus omit, expressing complex ideas clearly, and distinguishing between ephemeral and durable knowledge. These communication and prioritization skills, once considered “soft,” are now core technical competencies in context engineering. But let’s move beyond the EQ discussion — the real transformation is much bigger.
Mental material encompasses far more than simple data or documentation. It includes declarative knowledge (facts, data, documentation), procedural knowledge (how to approach problems, methodologies), conditional knowledge (when to apply different strategies), meta-knowledge (understanding about the knowledge itself), contextual constraints (what’s relevant vs. irrelevant for specific tasks), long-term memory (stable patterns, preferences, and principles that rarely change), and short-term memory (session-specific context, recent decisions, and ephemeral state that helps maintain coherence within a particular interaction).
Traditional engineering was about building systems. AI engineering is about designing cognitive architectures. You’re not just writing code — you’re crafting how artificial minds understand and approach problems. This means your daily work now includes memory architecture (deciding what information gets stored where, how it’s organized, and when it gets retrieved — not database design, but epistemological engineering), context strategy (determining what mental material an AI needs for different types of tasks), knowledge curation (maintaining the quality and relevance of information over time, as mental material degrades and becomes outdated), cognitive workflow design (orchestrating how AI systems access, process, and apply contextual information), and metacognitive monitoring (analyzing whether the context strategies are working and adapting them based on outcomes).
The engineers who thrive will be those who can bridge technical precision with cognitive empathy — understanding not just how systems work, but how to help artificial minds understand and reason about problems. This transformation isn’t just about new tools or frameworks. It’s about fundamentally reconceptualizing what engineering means in an AI-first world.
We’ve built sophisticated AI systems that can reason, write, and solve complex problems, yet we’re still manually feeding them context like we’re spoon-feeding a child. Every AI application faces the same fundamental challenge: How do you help an artificial mind understand what it needs to know?
Currently, we solve this through memory storage systems that dump everything into databases, prompt templates that hope to capture the right context, RAG systems that retrieve documents but don’t understand relevance, and manual curation that doesn’t scale. None of these truly understands the intentionality behind a request or can autonomously determine what mental material is needed. We’re essentially doing cognitive architecture manually, request by request, application by application.
This brings us to a fascinating philosophical question: What would truly intelligent context orchestration look like? Imagine a system that operates as a cognitive intermediary — analyzing not just what someone is asking, but understanding the deeper intentionality behind the request.
Consider this example: “Help me optimize this database query — it’s running slow.” Most systems provide generic query optimization tips, but intelligent context orchestration would perform cognitive analysis to understand that this performance issue has dramatically different underlying intents based on context.
If it’s a junior developer, they need procedural knowledge (how to analyze execution plans) plus declarative knowledge (indexing fundamentals) plus short-term memory (what they tried already this session). If it’s a senior developer under deadline pressure, they need conditional knowledge (when to denormalize vs. optimize) plus long-term memory (this person prefers pragmatic solutions) plus contextual constraints (production system limitations). If it’s an architect reviewing code, they need meta-knowledge (why this pattern emerged) plus procedural knowledge (systematic performance analysis) plus declarative knowledge (system-wide implications).
Context-dependent realities might reveal the “slow query” isn’t actually a query problem — maybe it’s running in a resource-constrained Docker container, or it’s an internal tool used infrequently where 5 milliseconds doesn’t matter. Perhaps the current query is intentionally slower because the optimized version would sacrifice readability (violating team guidelines), and the system should suggest either a local override for performance-critical cases or acceptance of the minor delay.
The problem with even perfect prompts is clear: You could craft the world’s best prompt about database optimization, but without understanding who is asking, why they’re asking, and what they’ve already tried, you’re essentially giving a lecture to someone who might need a quick fix, a learning experience, or a strategic decision framework. And even if you could anticipate every scenario, you’d quickly hit token limits trying to include all possible contexts in a single prompt. The context strategy must determine not just what information to provide, but what type of mental scaffolding the person needs to successfully integrate that information — and dynamically assemble only the relevant context for that specific interaction.
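The role-based breakdown above can be sketched as a simple context-selection routine. Everything here is illustrative scaffolding: the `Requester` dataclass, the knowledge-type tags, and the selection rules are hypothetical names invented for this sketch, not an existing API, and a real orchestrator would infer the profile rather than receive it.

```python
from dataclasses import dataclass, field

# Hypothetical tags for the knowledge taxonomy discussed earlier.
KNOWLEDGE_TYPES = {
    "declarative", "procedural", "conditional",
    "meta", "constraints", "long_term", "short_term",
}

@dataclass
class Requester:
    role: str                      # e.g. "junior", "senior", "architect"
    under_deadline: bool = False
    session_history: list = field(default_factory=list)

def select_mental_material(requester: Requester) -> set:
    """Pick which knowledge types to assemble for a
    'help me optimize this slow query' request.
    Rules mirror the role-based examples in the text."""
    if requester.role == "junior":
        # How-to steps, fundamentals, and what they tried this session.
        return {"procedural", "declarative", "short_term"}
    if requester.role == "senior":
        # Trade-off rules, known preferences, production constraints.
        return {"conditional", "long_term", "constraints"}
    if requester.role == "architect":
        # Why the pattern exists, systematic analysis, system-wide facts.
        return {"meta", "procedural", "declarative"}
    # Fallback: hand over everything and let token budgeting trim later.
    return set(KNOWLEDGE_TYPES)
```

The point of the sketch is the shape of the problem, not the rules themselves: context assembly is a function of who is asking and why, and only the selected subset ever reaches the prompt, which is what keeps the token budget manageable.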
This transformation raises profound questions about the nature of intelligence and communication. What does it mean to “understand” a request? When we ask an AI to help with a coding problem, are we asking for code, explanation, learning, validation, or something else entirely? Human communication is layered with implied context and unspoken assumptions. How do we formalize intuition? Experienced engineers often “just know” what information is relevant for a given situation. How do we encode that intuitive understanding into systems? What is the relationship between knowledge and context? The same piece of information can be useful or distracting depending on the cognitive frame it’s presented within.
These aren’t just technical challenges — they’re epistemological ones. We’re essentially trying to formalize how minds share understanding with other minds.
This transformation requires fundamentally reconceptualizing what engineering means in an AI-first world, but it’s crucial to understand that we’re not throwing decades of engineering wisdom out the window. All the foundational engineering knowledge you’ve accumulated — design patterns, data structures and algorithms, system architecture, software engineering principles (SOLID, DRY, KISS), database design, distributed systems concepts, performance optimization, testing methodologies, security practices, code organization and modularity, error handling and resilience patterns, scalability principles, and debugging methodologies — remains incredibly valuable.
This knowledge serves a dual purpose in the AI era. First, it enables you to create better mental material by providing AI systems with proven patterns, established principles, and battle-tested approaches rather than ad-hoc solutions. When you teach an AI about system design, you’re drawing on decades of collective engineering wisdom about what works and what doesn’t. Second, this deep technical knowledge allows you to act as an intelligent co-pilot, providing real-time feedback and corrections as AI systems work through problems. You can catch when an AI suggests an anti-pattern, guide it toward more robust solutions, or help it understand why certain trade-offs matter in specific contexts.
Importantly, these real-time corrections and refinements should themselves become part of the mental material. When you guide an AI away from a poor architectural choice or toward a better algorithm, that interaction should be captured and integrated into the system’s knowledge base, making it progressively more precise and aligned with good engineering practices over time.
Traditional engineering focused on deterministic systems, optimized for performance and reliability, measured success by uptime and speed, and treated communication as secondary to functionality. AI engineering designs probabilistic, context-dependent systems, optimizes for effectiveness and adaptability, measures success by goal achievement and learning, and makes communication a core technical competency — but it builds on all the foundational principles that make software systems robust and maintainable.
If you’re an engineer reading this, here’s how to prepare for the mental material revolution: Develop context awareness by thinking about the knowledge transfer patterns in your current work. How do you onboard new team members? How do you document complex decisions? These skills directly translate to context engineering. Practice explanatory engineering by forcing yourself to articulate not just what you’re building, but why, how, and when. Write documentation as if you’re teaching someone who’s brilliant but has no context about your domain. Study cognitive architecture to understand how humans process information, make decisions, and apply knowledge — this will help you design better AI context strategies. Build context systems by experimenting with prompt engineering, RAG systems, and memory management. Embrace the meta-layer and get comfortable with systems that manage other systems, as context orchestration is inherently meta-engineering.
We’re entering an era where the most valuable engineers won’t be those who can write the most elegant algorithms, but those who can design the most effective cognitive architectures. The ability to understand, communicate, and orchestrate mental material will become as fundamental as understanding data structures and algorithms.
The question isn’t whether this transformation will happen — it’s already underway. The question is whether you’ll be building the mental scaffolding that powers the next generation of AI systems, or whether you’ll be left behind trying to manually manage context in an increasingly automated world. Your emotional intelligence isn’t just a nice-to-have soft skill anymore. It’s becoming your most valuable engineering asset.
The mental material revolution is here. Are you ready to become a cognitive architect?
What’s your experience with context engineering? Are you already seeing this shift in your organization? Share your thoughts and let’s discuss how we can build better mental material orchestration systems together.
While building an MCP server last week, I got curious about what Claude CLI stores locally on my machine.
A simple 24-hour monitoring experiment revealed a significant security blind spot that most developers aren't aware of.
• API keys for multiple services (OpenAI, GitHub, AWS)
• Database connection strings with credentials
• Detailed tech stack and architecture discussions
• Team processes and organizational context
• Personal debugging patterns and approaches
All stored locally in plain text, searchable, and organized by timestamp.
Adoption reality: 500K+ developers now use AI coding assistants daily
Security awareness: Most teams haven't considered what's being stored locally
The disconnect: We're moving fast on AI integration but haven't updated our security practices to match
Traditional security assumes attackers need time and expertise to map your systems. AI conversation logs change that equation - they contain pre-analyzed intelligence about your infrastructure, complete with context and explanations.
It's like having detailed reconnaissance already done, just sitting in text files.
"But if someone has my laptop, I'm compromised anyway, right?"
This is the pushback I keep hearing, and it misses the key difference:
Traditional laptop access = attackers hunt through scattered files for days/weeks
AI conversation logs = a complete, contextualized intelligence report you personally wrote
Instead of reverse-engineering your setup, they get: "I'm connecting to our MongoDB cluster at mongodb://admin:password@prod-server - can you help debug this?"
The reconnaissance work is already done. They just read your explanations.
Claude initially refused to help me build a monitoring script, thinking I was trying to attack a system. Yet the same AI would likely help an attacker who asked politely about "monitoring their own files for research."
I've written up the full technical discovery process, including the monitoring methodology and security implications.
Read the complete analysis: https://medium.com/@gabi.beyo/the-silent-security-crisis-how-ai-coding-assistants-are-creating-perfect-attack-blueprints-71fd375d51a3
How is your team handling AI conversation data? Are local storage practices part of your security discussions?
Comment Roger Cressey served two US presidents as a senior cybersecurity and counter-terrorism advisor and currently worries he'll experience a "political aneurysm" due to Microsoft's many security messes.…
Black Hat A trio of researchers has disclosed a major prompt injection vulnerability in Google's Gemini large language model-powered applications.…
updated Privacy groups report a surge in UK police facial recognition searches against databases secretly stocked with passport photos, all without parliamentary oversight.…
Amid the furor around surging VPN usage in the UK, many users are eyeing proxies as a potential alternative to the technology.…
Opinion You might think, since I write about tech all the time, my degrees are in computer science. Nope. I'm a bona fide, degreed historian, which is why I can say with confidence that the UK's recently passed Online Safety Act is doomed to fail.…
Angry Magpie and Copycat