State-Sponsored Hackers Exploit AI in Cyberattacks: Google

By gvfx00@gmail.com | February 12, 2026


State-sponsored hackers are exploiting advanced AI tooling to accelerate their cyberattacks, with threat actors from Iran, North Korea, China, and Russia using models such as Google’s Gemini to further their campaigns. The groups are crafting sophisticated phishing campaigns and developing malware, according to a new report from Google’s Threat Intelligence Group (GTIG).

The quarterly AI Threat Tracker report, released today, details how government-backed attackers have begun to use artificial intelligence across the attack lifecycle – reconnaissance, social engineering, and, ultimately, malware development – based on GTIG’s observations during the final quarter of 2025.

    “For government-backed threat actors, large language models have become essential tools for technical research, targeting, and the rapid generation of nuanced phishing lures,” GTIG researchers stated in their report.


    Reconnaissance by state-sponsored hackers targets the defence sector

Iranian threat actor APT42 is reported to have used Gemini to augment its reconnaissance and targeted social engineering operations. The group used the model to create official-looking email addresses impersonating specific entities, then conducted research to establish credible pretexts for approaching targets.

APT42 crafted personas and scenarios designed to elicit engagement from its targets, translating between languages and deploying natural, native-sounding phrases that helped it evade traditional phishing red flags, such as poor grammar or awkward syntax.

    North Korean government-backed actor UNC2970, which focuses on defence targeting and impersonating corporate recruiters, used Gemini to help it profile high-value targets. The group’s reconnaissance included searching for information on major cybersecurity and defence companies, mapping specific technical job roles, and gathering salary information.

    “This activity blurs the distinction between routine professional research and malicious reconnaissance, as the actor gathers the necessary components to create tailored, high-fidelity phishing personas,” GTIG noted.

    Model extraction attacks surge

Beyond operational misuse, Google DeepMind and GTIG identified an increase in model extraction attempts – also known as “distillation attacks” – aimed at stealing intellectual property from AI models.

One campaign targeting Gemini’s reasoning abilities involved more than 100,000 prompts designed to coerce the model into revealing its reasoning processes. The breadth of the questions suggested an attempt to replicate Gemini’s reasoning ability in non-English languages across a variety of tasks.
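To make the concept concrete: distillation means training a cheap “student” model purely on a “teacher” model’s input/output pairs. The sketch below is a deliberately toy illustration – the teacher is a stand-in function, whereas a real extraction attempt would issue bulk queries to a hosted model’s API and fine-tune another LLM on the harvested transcripts.

```python
# Toy illustration of distillation: a "student" learns to mimic a
# "teacher" using only the teacher's observable input/output behaviour.

def teacher(prompt: str) -> str:
    # Stand-in for a proprietary model's API endpoint.
    return prompt.upper()

def collect_transcripts(prompts):
    # Step 1: harvest (prompt, response) pairs at scale.
    return [(p, teacher(p)) for p in prompts]

def train_student(transcripts):
    # Step 2: fit an imitation. Here the "student" is just a lookup
    # table; a real attack would fine-tune a separate model instead.
    return dict(transcripts)

student = train_student(collect_transcripts(["hello", "explain recursion"]))
assert student["hello"] == teacher("hello")  # student mimics teacher
```

The key point is that no internal access to the teacher is needed – only enough queries, which is why the 100,000-prompt volume above is the telltale signal.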

    How model extraction attacks work to steal AI intellectual property. (Image: Google GTIG)

    While GTIG observed no direct attacks on frontier models from advanced persistent threat actors, the team identified and disrupted frequent model extraction attacks from private sector entities globally and researchers seeking to clone proprietary logic.

    Google’s systems recognised these attacks in real-time and deployed defences to protect internal reasoning traces.
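The report does not describe Google’s actual defences, but one simple ingredient of real-time detection is per-client rate monitoring over a sliding window. The class and thresholds below are hypothetical, purely to illustrate the idea:

```python
from collections import deque
import time

class ExtractionMonitor:
    """Toy sliding-window detector: flag API clients whose query volume
    suggests bulk prompt harvesting. Thresholds are illustrative."""

    def __init__(self, max_queries=1000, window_s=3600):
        self.max_queries = max_queries
        self.window_s = window_s
        self.events = {}  # client_id -> deque of timestamps

    def record(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.events.setdefault(client_id, deque())
        q.append(now)
        # Drop events that have aged out of the window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        return len(q) > self.max_queries  # True -> suspicious volume
```

Production systems would combine volume signals with content signals (e.g. prompts probing for reasoning traces), but the windowed counter captures the basic shape of the defence.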

    AI-integrated malware emerges

    GTIG observed malware samples, tracked as HONESTCUE, that use Gemini’s API to outsource functionality generation. The malware is designed to undermine traditional network-based detection and static analysis through a multi-layered obfuscation approach.

    HONESTCUE functions as a downloader and launcher framework that sends prompts via Gemini’s API and receives C# source code as responses. The fileless secondary stage compiles and executes payloads directly in memory, leaving no artefacts on disk.

    HONESTCUE malware’s two-stage attack process using Gemini’s API. (Image: Google GTIG)

    Separately, GTIG identified COINBAIT, a phishing kit whose construction was likely accelerated by AI code generation tools. The kit, which masquerades as a major cryptocurrency exchange for credential harvesting, was built using the AI-powered platform Lovable AI.

    ClickFix campaigns abuse AI chat platforms

    In a novel social engineering campaign first observed in December 2025, Google saw threat actors abuse the public sharing features of generative AI services – including Gemini, ChatGPT, Copilot, DeepSeek, and Grok – to host deceptive content distributing ATOMIC malware targeting macOS systems.

    Attackers manipulated AI models to create realistic-looking instructions for common computer tasks, embedding malicious command-line scripts as the “solution.” By creating shareable links to these AI chat transcripts, threat actors used trusted domains to host their initial attack stage.

    The three-stage ClickFix attack chain exploiting AI chat platforms. (Image: Google GTIG)

    Underground marketplace thrives on stolen API keys

    GTIG’s observations of English and Russian-language underground forums indicate a persistent demand for AI-enabled tools and services. However, state-sponsored hackers and cybercriminals struggle to develop custom AI models, instead relying on mature commercial products accessed through stolen credentials.

    One toolkit, “Xanthorox,” advertised itself as a custom AI for autonomous malware generation and phishing campaign development. GTIG’s investigation revealed Xanthorox was not a bespoke model but actually powered by several commercial AI products, including Gemini, accessed through stolen API keys.
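Because services like Xanthorox run on stolen credentials, a basic defensive countermeasure is scanning public text (repositories, paste sites, forum dumps) for leaked key material. The sketch below matches the publicly documented Google API key shape (a 39-character token with an “AIza” prefix); real secret scanners cover many more credential formats.

```python
import re

# Hedged sketch: detect strings resembling Google API keys in text.
# The "AIza" prefix and 39-character length are publicly documented.
GOOGLE_API_KEY_RE = re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b")

def find_leaked_keys(text: str) -> list:
    """Return all substrings of `text` that look like Google API keys."""
    return GOOGLE_API_KEY_RE.findall(text)
```

Rotating any key such a scan surfaces, and restricting keys by API and referrer, removes the raw material these resale marketplaces depend on.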

    Google’s response and mitigations

Google has taken action against identified threat actors by disabling accounts and assets associated with malicious activity. The company has also applied this intelligence to strengthen both its classifiers and its models, enabling them to refuse assistance with similar attacks in future.

    “We are committed to developing AI boldly and responsibly, which means taking proactive steps to disrupt malicious activity by disabling the projects and accounts associated with bad actors, while continuously improving our models to make them less susceptible to misuse,” the report stated.

    GTIG emphasised that despite these developments, no APT or information operations actors have achieved breakthrough abilities that fundamentally alter the threat landscape.

    The findings underscore the evolving role of AI in cybersecurity, as both defenders and attackers race to use the technology’s abilities.

    For enterprise security teams, particularly in the Asia-Pacific region where Chinese and North Korean state-sponsored hackers remain active, the report serves as an important reminder to enhance defences against AI-augmented social engineering and reconnaissance operations.

    (Photo by SCARECROW artworks)

    See also: Anthropic just revealed how AI-orchestrated cyberattacks actually work – Here’s what enterprises need to know


