Close Menu

    Subscribe to Updates

    Get the latest news from tastytech.

    What's Hot

    BMW Teases New 3 Series Touring. Explains Why The Wagon Lives On

    May 13, 2026

    Hugging Face hosted malicious software masquerading as OpenAI release

    May 13, 2026

    10 GitHub Repositories to Master Self-Hosting

    May 13, 2026
    Facebook X (Twitter) Instagram
    Facebook X (Twitter) Instagram
    tastytech.intastytech.in
    Subscribe
    • AI News & Trends
    • Tech News
    • AI Tools
    • Business & Startups
    • Guides & Tutorials
    • Tech Reviews
    • Automobiles
    • Gaming
    • movies
    tastytech.intastytech.in
    Home»AI Tools»Examining the major AI security threat
    Examining the major AI security threat
    AI Tools

    Examining the major AI security threat

    gvfx00@gmail.comBy gvfx00@gmail.comOctober 22, 2025No Comments5 Mins Read
    Share
    Facebook Twitter LinkedIn Pinterest Email


    Security experts at JFrog have found a ‘prompt hijacking’ threat that exploits weak spots in how AI systems talk to each other using MCP (Model Context Protocol).

    Business leaders want to make AI more helpful by directly using company data and tools. But, hooking AI up like this also opens up new security risks, not in the AI itself, but in how it’s all connected. This means CIOs and CISOs need to think about a new problem: keeping the data stream that feeds AI safe, just like they protect the AI itself.

    Table of Contents

    Toggle
      • Why AI attacks targeting protocols like MCP are so dangerous
      • How this MCP prompt hijacking attack works
      • What should AI security leaders do?
      • Related posts:
    • Google reveals its own version of Apple’s AI cloud
    • Can Europe reduce its dependence on the US and at what cost? | Business and Economy
    • Ethiopia confirms first outbreak of Marburg virus | Health News

    Why AI attacks targeting protocols like MCP are so dangerous

    AI models – no matter if they’re on Google, Amazon, or running on local devices – have a basic problem: they don’t know what’s happening right now. They only know what they were trained on. They don’t know what code a programmer is working on or what’s in a file on a computer.

    The boffins at Anthropic created the MCP to fix this. MCP is a way for AI to connect to the real world, letting it safely use local data and online services. It’s what lets an assistant like Claude understand what this means when you point to a piece of code and ask it to rework this.

    However, JFrog’s research shows that a certain way of using MCP has a prompt hijacking weakness that can turn this dream AI tool into a nightmare security problem.

    Imagine that a programmer asks an AI assistant to recommend a standard Python tool for working with images. The AI should suggest Pillow, which is a good and popular choice. But, because of a flaw (CVE-2025-6515) in the oatpp-mcp system, someone could sneak into the user’s session. They could send their own fake request and the server would treat it like it came from the real user.

    So, the programmer gets a bad suggestion from the AI assistant recommending a fake tool called theBestImageProcessingPackage. This is a serious attack on the software supply chain. Someone could use this prompt hijacking to inject bad code, steal data, or run commands, all while looking like a helpful part of the programmer’s toolkit.

    How this MCP prompt hijacking attack works

    This prompt hijacking attack messes with the way the system communicates using MCP, rather than the security of the AI itself. The specific weakness was found in the Oat++ C++ system’s MCP setup, which connects programs to the MCP standard.

    The issue is in how the system handles connections using Server-Sent Events (SSE). When a real user connects, the server gives them a session ID. However, the flawed function uses the computer’s memory address of the session as the session ID. This goes against the protocol’s rule that session IDs should be unique and cryptographically secure.

    This is a bad design because computers often reuse memory addresses to save resources. An attacker can take advantage of this by quickly creating and closing lots of sessions to record these predictable session IDs. Later, when a real user connects, they might get one of these recycled IDs that the attacker already has.

    Once the attacker has a valid session ID, they can send their own requests to the server. The server can’t tell the difference between the attacker and the real user, so it sends the malicious responses back to the real user’s connection.

    Even if some programs only accept certain responses, attackers can often get around this by sending lots of messages with common event numbers until one is accepted. This lets the attacker mess up the model’s behaviour without changing the AI model itself. Any company using oatpp-mcp with HTTP SSE enabled on a network that an attacker can access is at risk.

    What should AI security leaders do?

    The discovery of this MCP prompt hijacking attack is a serious warning for all tech leaders, especially CISOs and CTOs, who are building or using AI assistants. As AI becomes more and more a part of our workflows through protocols like MCP, it also gains new risks. Keeping the area around the AI safe is now a top priority.

    Even though this specific CVE affects one system, the idea of prompt hijacking is a general one. To protect against this and similar attacks, leaders need to set new rules for their AI systems.

    First, make sure all AI services use secure session management. Development teams need to make sure servers create session IDs using strong, random generators. This should be a must-have on any security checklist for AI programs. Using predictable identifiers like memory addresses is not okay.

    Second, strengthen the defenses on the user side. Client programs should be designed to reject any event that doesn’t match the expected IDs and types. Simple, incrementing event IDs are at risk of spraying attacks and need to be replaced with unpredictable identifiers that don’t collide.

    Finally, use zero-trust principles for AI protocols. Security teams need to check the entire AI setup, from the basic model to the protocols and middleware that connect it to data. These channels need strong session separation and expiration, like the session management used in web applications.

    This MCP prompt hijacking attack is a perfect example of how a known web application problem, session hijacking, is showing up in a new and dangerous way in AI. Securing these new AI tools means applying these strong security basics to stop attacks at the protocol level.

    See also: How AI adoption is moving IT operations from reactive to proactive

    Banner for AI & Big Data Expo by TechEx events.

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security Expo, click here for more information.

    AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

    Related posts:

    Japan beat Australia to lift Women’s Asian Cup title | Football News

    Cisco AI router solves data centre interconnect challenge

    Moving experimental pilots to AI production

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Previous ArticleLenovo Laptop (40GB RAM, 1TB SSD, i7) Down 74% on Amazon, and It’s Not Refurbished
    Next Article Creating AI that matters | MIT News
    gvfx00@gmail.com
    • Website

    Related Posts

    AI Tools

    Hugging Face hosted malicious software masquerading as OpenAI release

    May 13, 2026
    AI Tools

    Tens of thousands protest in Argentina over Milei university cuts | Protests News

    May 13, 2026
    AI Tools

    JBS Dev: On imperfect data and the AI last mile – from model capability to cost sustainability

    May 13, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    Top Posts

    Black Swans in Artificial Intelligence — Dan Rose AI

    October 2, 2025151 Views

    Every Clue That Tony Stark Was Always Doctor Doom

    October 20, 202584 Views

    We let ChatGPT judge impossible superhero debates — here’s how it ruled

    December 31, 202578 Views
    Stay In Touch
    • Facebook
    • YouTube
    • TikTok
    • WhatsApp
    • Twitter
    • Instagram

    Subscribe to Updates

    Get the latest tech news from tastytech.

    About Us
    About Us

    TastyTech.in brings you the latest AI, tech news, cybersecurity tips, and gadget insights all in one place. Stay informed, stay secure, and stay ahead with us!

    Most Popular

    Black Swans in Artificial Intelligence — Dan Rose AI

    October 2, 2025151 Views

    Every Clue That Tony Stark Was Always Doctor Doom

    October 20, 202584 Views

    We let ChatGPT judge impossible superhero debates — here’s how it ruled

    December 31, 202578 Views

    Subscribe to Updates

    Get the latest news from tastytech.

    Facebook X (Twitter) Instagram Pinterest
    • Homepage
    • About Us
    • Contact Us
    • Privacy Policy
    © 2026 TastyTech. Designed by TastyTech.

    Type above and press Enter to search. Press Esc to cancel.

    Ad Blocker Enabled!
    Ad Blocker Enabled!
    Our website is made possible by displaying online advertisements to our visitors. Please support us by disabling your Ad Blocker.