    AI models can’t fully understand security – and they never will

By gvfx00@gmail.com | February 17, 2026 | 5 Mins Read



AI-assisted coding, or 'vibe coding', is a trend that dominated last year, so much so that the term was named word of the year for 2025.

Since OpenAI co-founder Andrej Karpathy coined the term, developers have been wooed by the prospect of minimizing coding 'grunt' work.

    Even non-programmers without extensive coding knowledge have created software, leading to an uptick in generative AI use for software development.


Despite the hype, research from Veracode reveals that large language models (LLMs) choose secure code only 55% of the time, raising fundamental security concerns.


Whilst LLMs can predict syntax, they can't comprehend cybersecurity. Semantic concepts like risk are not within their grasp, and their limited context window – in effect, a short-term memory – means LLMs in their current state may miss the bigger picture, including security vulnerabilities.

    Even the largest models can’t hold the kind of memory required to understand which data is dangerous and why.

Table of Contents

• The AI code vulnerability problem
• The limits of LLM understanding – and what this means for security
• Security still demands human judgement

    The AI code vulnerability problem

    AI-generated code can, on the surface, look correct and secure, yet subtle vulnerabilities are nearly always hidden within it.

    LLM training has a history of prioritizing accuracy over cybersecurity. Whilst significant effort goes into improving correctness, with newer, larger models now generating highly functional code, far less attention goes to whether that code is actually secure.

    The primary issue is that developers don’t need to specify security constraints in their AI prompts to get the code they want.

    For example, a developer can prompt a model to generate a database query without specifying whether the code should construct the query using a prepared statement (safe) or string concatenation (unsafe). The choice, therefore, is left up to the model.
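To make the difference concrete, here is a minimal Python sketch, using the standard library's sqlite3 module, of two completions a model could plausibly produce for the same prompt. The table and function names are illustrative, not taken from any real codebase:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # String concatenation: the user-supplied value becomes part of the SQL text,
    # so input such as "x' OR '1'='1" rewrites the query itself (SQL injection).
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Prepared statement: the value is bound separately from the SQL,
    # so it is always treated as data, never as query structure.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both functions satisfy the same prompt and return the same results for ordinary input, which is exactly why a model optimized for functional correctness has no strong reason to prefer one over the other.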



    For some vulnerabilities, the model might not be able to determine which specific variables require sanitization (i.e., which variables are “tainted” by user-controlled data).
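The following sketch, with hypothetical function names, shows why taint can be invisible when the model looks at code locally: by the time the value reaches the query builder, nothing about it suggests it came from a user.

```python
def build_report_query(settings: dict) -> str:
    # Viewed in isolation, "category" looks like a trusted configuration value;
    # nothing in this function indicates that it is tainted by user input.
    return "SELECT * FROM reports WHERE category = '" + settings["category"] + "'"

def handle_request(request_params: dict) -> str:
    # The taint is introduced here: a user-controlled request parameter is copied
    # into an innocuous-looking settings dict before it reaches the query builder.
    settings = {"category": request_params.get("category", "all")}
    return build_report_query(settings)
```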

    These weaknesses are not accidental, but stem from fundamental limitations in how large language models understand context, intent and risk.

    The limits of LLM understanding – and what this means for security

Progress toward secure AI-generated code has flatlined, with recent research finding that major LLM providers have made little or no security progress despite continued development across the sector.

For example, models from Anthropic, Google and xAI fell within a 50 to 59 percent security pass rate, notably poor for the big players in tech. But why isn't security performance getting better, even as models' fluency and functional accuracy improve?

    A big part of the answer lies in the training data. LLMs learn by consuming huge amounts of data and information from the internet, much of which contains hidden or unresolved security vulnerabilities.

    For example, projects like WebGoat deliberately include insecure patterns to teach web application security through hands-on exercises with intentional vulnerabilities. This data isn’t labelled as secure or insecure, so the model treats both types the same. To the AI, an unsafe database query or a safe one are just two valid ways to complete your request.

    LLMs also don’t separate data from instructions. Instead, they simply process everything as a stream of tokens. This opens the door for prompt injections, where attackers can slip malicious instructions into what appears to be ordinary user input. If a model can’t tell the difference, it can be tricked into behaving in ways the developer never intended.

    It’s also important to understand that security risks show up at different layers. Some vulnerabilities stem from the model itself, with flaws in how the AI interprets or generates content.

    Others arise at the application layer, where developers integrate the LLM into real systems. In those cases, risk comes from how the system handles external input, user permissions and interactions with other components.

    For developers, generative AI tools have the potential to transform efficiency, but they should only ever be used as assistants to enhance productivity in specific tasks—not as a replacement for human skills.

In doing so, developers must apply the same level of scrutiny to security as they do to coding accuracy. Every AI suggestion, no matter how polished it appears, should be treated as untrusted until proven secure. This means validating how user input is handled, ensuring practices like authentication and authorization are implemented correctly, and always assuming the AI will take shortcuts.

    Organizations must also adopt “secure-by-default” tooling around the AI, to ensure security is integrated from the start. This includes real-time static analysis that flags insecure code the moment it is generated, policy enforcement that blocks unsafe patterns from being committed, and implementing guardrail processes that restrict the AI model to known secure libraries and coding patterns.
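As one sketch of what such a guardrail might look like in practice (the allowlist, file layout and function names are assumptions, not a prescribed toolchain), a pre-commit or CI step could parse AI-generated Python and reject anything that imports a library outside an approved set:

```python
import ast
import sys

# Hypothetical allowlist of vetted libraries; a real policy would be maintained
# by the security team and would be considerably longer.
APPROVED_MODULES = {"sqlite3", "hashlib", "secrets", "logging"}

def disallowed_imports(source: str) -> list:
    """Return the imported module names in `source` that are not on the allowlist."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module] if node.module else []
        else:
            continue
        for name in names:
            if name.split(".")[0] not in APPROVED_MODULES:
                violations.append(name)
    return violations

if __name__ == "__main__":
    # Usage: python check_imports.py generated_module.py
    with open(sys.argv[1]) as handle:
        problems = disallowed_imports(handle.read())
    if problems:
        print("Blocked: unapproved imports:", ", ".join(sorted(set(problems))))
        sys.exit(1)  # A non-zero exit fails the commit hook or CI step.
```

A real deployment would pair a check like this with a proper static analyzer and organization-specific policy, but the principle is the same: generated code never reaches the repository until it passes rules that humans defined.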

Rather than simply relying on existing training centered around human-written code, organizations must adopt tailored training to help developers adapt to this new reality. This specialized training should cover how to recognize common LLM failure modes, how attacks like prompt injection and data leakage work and, perhaps most importantly, when not to use AI at all, especially in high-stakes domains like cryptography.

    Security still demands human judgement

    Generative AI absolutely can accelerate delivery and raise the floor of capability, but it also raises the stakes. The industry needs to accept an uncomfortable truth – as LLMs continue to get faster, more fluent and more helpful, they cannot be relied upon to generate secure code. They can’t reason about threat models, judge intent or recognize when harmless logic creates dangerous side effects.

    If organizations don’t put in place appropriate safeguards, secure-by-default tooling, and human oversight, they risk pushing out dangerous vulnerabilities at scale and speed.

    Ultimately, AI has changed how we build software. But unless we change how we secure it, we’re simply giving cybercriminals a larger attack surface, delivered more efficiently than ever.
