Security lapses emerge amid the global AI race

By gvfx00@gmail.com · November 12, 2025


    According to Wiz, the race among AI companies is causing many to overlook basic security hygiene practices.

Of the 50 leading AI firms the cybersecurity firm analysed, 65 percent had leaked verified secrets on GitHub. The exposures include API keys, tokens, and other sensitive credentials, often buried in code repositories that standard security tools do not check.
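The kind of leak described above is typically caught with pattern matching over repository contents. The sketch below is illustrative only: the regexes are assumptions based on publicly known key formats (e.g. AWS access keys beginning `AKIA`), not the patterns Wiz used.

```python
import re

# Minimal secret scanner over text content. Patterns are illustrative
# examples of well-known credential formats, not an exhaustive set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|token|secret)\b\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in `text`."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Running `find_secrets` over every file in a repository would flag candidates for manual review; real scanners add entropy checks and verification against the issuing service to cut false positives.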

    Glyn Morgan, Country Manager for UK&I at Salt Security, described this trend as a preventable and basic error. “When AI firms accidentally expose their API keys they lay bare a glaring avoidable security failure,” he said.

“It’s the textbook example of a governance failure paired with a security misconfiguration, two of the risk categories that OWASP flags. By pushing credentials into code repositories, they hand attackers a golden ticket to systems, data, and models, effectively sidestepping the usual defensive layers.”

    Wiz’s report highlights the increasingly complex supply chain security risk. The problem extends beyond internal development teams; as enterprises increasingly partner with AI startups, they may inherit their security posture. The researchers warn that some of the leaks they found “could have exposed organisational structures, training data, or even private models.”

    The financial stakes are considerable. The companies analysed with verified leaks have a combined valuation of over $400 billion.

    The report, which focused on companies listed in the Forbes AI 50, provides examples of the risks:

    • LangChain was found to have exposed multiple Langsmith API keys, some with permissions to manage the organisation and list its members. This type of information is highly valued by attackers for reconnaissance.
    • An enterprise-tier API key for ElevenLabs was discovered sitting in a plaintext file.
    • An unnamed AI 50 company had a HuggingFace token exposed in a deleted code fork. This single token “allow[ed] access to about 1K private models”. The same company also leaked WeightsAndBiases keys, exposing the “training data for many private models.”

The Wiz report suggests this problem is so prevalent because traditional security scanning methods are no longer sufficient. Relying on basic scans of a company’s main GitHub repositories is a “commoditised approach” that misses the most severe risks.

The researchers describe the situation as an “iceberg”: the most obvious risks are visible, but the greater danger lies “below the surface”. To find these hidden risks, the researchers adopted a three-dimensional scanning methodology they call “Depth, Perimeter, and Coverage”:

    • Depth: Their deep scan analysed the “full commit history, commit history on forks, deleted forks, workflow logs and gists”—areas most scanners “never touch”.
    • Perimeter: The scan was expanded beyond the core company organisation to include organisation members and contributors. These individuals might “inadvertently check company-related secrets into their own public repositories”. The team identified these adjacent accounts by tracking code contributors, organisation followers, and even “correlations in related networks like HuggingFace and npm.”
    • Coverage: The researchers specifically looked for new AI-related secret types that traditional scanners often miss, such as keys for platforms like WeightsAndBiases, Groq, and Perplexity.
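The “Depth” dimension can be approximated locally by grepping every patch ever committed rather than only the checked-out tree. The sketch below is a hypothetical illustration of that idea; the token prefixes are assumptions based on publicly documented formats (Hugging Face tokens begin `hf_`, Groq keys begin `gsk_`), not Wiz’s actual ruleset.

```python
import re
import subprocess

# AI-platform key patterns that many traditional scanners miss.
# Prefixes are assumptions drawn from public documentation.
AI_KEY_PATTERNS = {
    "huggingface": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),
    "groq": re.compile(r"\bgsk_[A-Za-z0-9]{20,}\b"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (platform, matched_token) pairs found anywhere in `text`."""
    hits = []
    for name, pattern in AI_KEY_PATTERNS.items():
        hits.extend((name, m.group(0)) for m in pattern.finditer(text))
    return hits

def scan_history(repo_path: str) -> list[tuple[str, str]]:
    """Scan every diff in the full commit history across all refs,
    including changes that were later reverted or deleted."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--all", "-p", "--no-color"],
        capture_output=True, text=True, check=True,
    ).stdout
    return scan_text(log)
```

Note that even this only reaches history still referenced by the repository; deleted forks and workflow logs, which the researchers also scanned, require the platform’s API.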

    This expanded attack surface is particularly worrying given the apparent lack of security maturity at many fast-moving companies. The report notes that when researchers tried to disclose the leaks, almost half of disclosures either failed to reach the target or received no response. Many firms lacked an official disclosure channel or simply failed to resolve the issue when notified.

    Wiz’s findings serve as a warning for enterprise technology executives, highlighting three immediate action items for managing both internal and third-party security risk.

1. Security leaders must treat their employees as part of their company’s attack surface. The report recommends creating a Version Control System (VCS) member policy to be applied during employee onboarding. This policy should mandate practices such as using multi-factor authentication for personal accounts and maintaining a strict separation between personal and professional activity on platforms like GitHub.
2. Internal secret scanning must evolve beyond basic repository checks. The report urges companies to mandate public VCS secret scanning as a “non-negotiable defense”. This scanning must adopt the aforementioned “Depth, Perimeter, and Coverage” mindset to find threats lurking below the surface.
3. This level of scrutiny must be extended to the entire AI supply chain. When evaluating or integrating tools from AI vendors, CISOs should probe their secrets management and vulnerability disclosure practices. The report notes that many AI service providers are leaking their own API keys and should “prioritise detection for their own secret types.”
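One cheap control that supports these recommendations is blocking secrets before they ever reach a repository. The following is a hypothetical pre-commit hook sketch (the patterns are illustrative, not exhaustive); dedicated tools exist for this, but the mechanism is simple enough to show:

```python
import re
import subprocess
import sys

# Matches added lines ("+...") in a staged diff that look like they
# introduce a credential. Patterns are illustrative assumptions.
SECRET_RE = re.compile(
    r"^\+.*(?:AKIA[0-9A-Z]{16}|hf_[A-Za-z0-9]{30,}|"
    r"(?i:api[_-]?key|secret)\s*[:=]\s*\S{16,})",
    re.MULTILINE,
)

def staged_diff_is_clean(diff: str) -> bool:
    """True if no added line in the diff resembles a secret."""
    return SECRET_RE.search(diff) is None

def main() -> None:
    # Call from .git/hooks/pre-commit to abort suspicious commits.
    diff = subprocess.run(
        ["git", "diff", "--cached", "--no-color"],
        capture_output=True, text=True,
    ).stdout
    if not staged_diff_is_clean(diff):
        sys.exit("Possible secret in staged changes; commit aborted.")
```

A hook like this does not replace the deep, organisation-wide scanning the report calls for, but it shrinks the window in which a credential sits in history at all.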

    The central message for enterprises is that the tools and platforms defining the next generation of technology are being built at a pace that often outstrips security governance. As Wiz concludes, “For AI innovators, the message is clear: speed cannot compromise security”. For the enterprises that depend on that innovation, the same warning applies.


    Table of Contents

    Toggle
      • Related posts:
    • How Standard Chartered runs AI under privacy rules
    • OpenAI, Google, Anthropic Launch Tools
    • Trump announces new tariffs over Greenland: How have EU allies responded? | Donald Trump News

    Related posts:

    FIFA World Cup 2026 will be the most AI-driven tournament ever. Here's the proof

    OpenAI restructures, enters ‘next chapter’ of Microsoft partnership

    Trump hosts Saudi Arabia’s Mohammed bin Salman: Five key takeaways | Politics News

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Previous ArticleThe first permanent Pokémon theme park opens in February 2026
    Next Article The AI Image Model That Could Redefine Visual Creativity
    gvfx00@gmail.com
    • Website

    Related Posts

    AI Tools

    Lebanon’s Aoun warns Israeli attack on bridge ‘prelude to ground invasion’ | Israel attacks Lebanon News

    March 22, 2026
    AI Tools

    Iran says will hit region’s energy sites if US, Israel target power plants | US-Israel war on Iran News

    March 22, 2026
    AI Tools

    Evloev upsets Murphy, sets up featherweight title shot against Volkanovski | Mixed Martial Arts News

    March 22, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    Top Posts

    BMW Will Put eFuel In Cars Made In Germany From 2028

    October 14, 202511 Views

    Best Sonic Lego Deals – Dr. Eggman’s Drillster Gets Big Price Cut

    December 16, 20259 Views

    What is Fine-Tuning? Your Ultimate Guide to Tailoring AI Models in 2025

    October 14, 20259 Views
    Stay In Touch
    • Facebook
    • YouTube
    • TikTok
    • WhatsApp
    • Twitter
    • Instagram

    Subscribe to Updates

    Get the latest tech news from tastytech.

    About Us
    About Us

    TastyTech.in brings you the latest AI, tech news, cybersecurity tips, and gadget insights all in one place. Stay informed, stay secure, and stay ahead with us!

    Most Popular

    BMW Will Put eFuel In Cars Made In Germany From 2028

    October 14, 202511 Views

    Best Sonic Lego Deals – Dr. Eggman’s Drillster Gets Big Price Cut

    December 16, 20259 Views

    What is Fine-Tuning? Your Ultimate Guide to Tailoring AI Models in 2025

    October 14, 20259 Views

    Subscribe to Updates

    Get the latest news from tastytech.

    Facebook X (Twitter) Instagram Pinterest
    • Homepage
    • About Us
    • Contact Us
    • Privacy Policy
    © 2026 TastyTech. Designed by TastyTech.

    Type above and press Enter to search. Press Esc to cancel.

    Ad Blocker Enabled!
    Ad Blocker Enabled!
    Our website is made possible by displaying online advertisements to our visitors. Please support us by disabling your Ad Blocker.