How to prepare for and remediate an AI system incident

April 26, 2026
    For all the possibilities AI gives us, there is always a chance of the technology malfunctioning or becoming compromised. In the event of an AI system crisis, new research from ISACA has found that the majority of organisations surveyed couldn’t explain how quickly they could stop an AI system emergency, or even report on what caused the issue.

According to ISACA’s report, 59% of digital trust professionals did not know how quickly their organisation could interrupt and halt an AI system during a security incident. Just 21% reported that they could meaningfully intervene within half an hour. This indicates a landscape where corrupted AI systems can continue to operate unchecked, risking irreversible damage.

    Ali Sarrafi, CEO & Founder of Kovant, an autonomous enterprise platform, said, “ISACA’s findings point to a major structural issue in the way that organisations are deploying AI. Systems are being embedded into critical workflows without the governance layer needed to supervise and audit their actions. If a business cannot quickly halt an AI system, explain its behaviour, or even identify who is to be held accountable, the business is not in control of that system.”


    AI failures and risks

    In all, only 42% of respondents expressed any confidence in their organisation being able to analyse and clarify serious AI incidents, thus leading to possible operational failures and security risks. Moreover, without explaining these incidents to regulators and leadership, businesses may face legal penalties and public backlash.

Proper analysis is needed to learn from mistakes. Without a clear understanding of what went wrong, the likelihood of repeated incidents only increases. It is important to manage AI responsibly, with effective AI governance, yet ISACA’s findings indicate this is often missing.
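One practical foundation for that kind of post-incident analysis is a simple audit trail of what each AI system did and with what inputs. The sketch below is purely illustrative — the class and field names are hypothetical, not drawn from ISACA’s report or any specific product — but it shows the minimum context a team would need to reconstruct and explain an incident afterwards.

```python
import json
import time

# Minimal audit trail for AI system actions.
# Illustrative sketch only: AIAuditLog and its fields are hypothetical names.
class AIAuditLog:
    def __init__(self):
        self.records = []

    def record(self, system, action, inputs, output):
        # Capture enough context to reconstruct the incident later.
        self.records.append({
            "timestamp": time.time(),
            "system": system,
            "action": action,
            "inputs": inputs,
            "output": output,
        })

    def export(self):
        # Serialise the trail for regulators or post-incident review.
        return json.dumps(self.records, indent=2)

log = AIAuditLog()
log.record("pricing-agent", "set_discount", {"sku": "A1"}, {"discount": 0.15})
```

In practice the trail would go to durable, tamper-evident storage rather than memory, but even this shape makes the difference between “we cannot explain the incident” and a reviewable record.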

Accountability is another fuzzy area, with 20% reporting that they do not know who would be held responsible if an AI system caused damage. Just 38% identified the board or an executive as ultimately responsible.

Sarrafi noted that slowing down AI adoption is not the answer; instead, rethinking how it is managed is key. “AI systems need to sit in a structured management layer that treats them as digital employees, with clear ownership, defined escalation paths, and the ability to be paused or overridden instantly when risk thresholds are crossed. That way, agents stop being mysterious bots and become systems you can inspect and trust. As AI becomes more deeply embedded in core business functions, governance cannot be an afterthought. It has to be built into the architecture from day one, with visibility and control designed in at every level. The organisations that get this right will not just reduce risk; they will be the ones that can confidently scale AI in the business.”
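The pattern Sarrafi describes — clear ownership, escalation paths, and an instant pause when a risk threshold is crossed — can be sketched as a thin wrapper around agent actions. This is a hypothetical illustration of the pattern, not Kovant’s implementation; the names (`GovernedAgent`, `risk_score`) are invented for the example.

```python
# Governance wrapper sketch: an action runs only while its risk score stays
# under a threshold; crossing it pauses the agent and escalates to an owner.
# All names here are hypothetical, for illustration only.
class AgentPaused(Exception):
    pass

class GovernedAgent:
    def __init__(self, owner, risk_threshold=0.8):
        self.owner = owner                  # clear ownership: who is accountable
        self.risk_threshold = risk_threshold
        self.paused = False

    def act(self, action, risk_score):
        if self.paused or risk_score >= self.risk_threshold:
            self.paused = True              # halt instantly; stay halted
            raise AgentPaused(f"escalate to {self.owner}")
        return f"executed {action}"

agent = GovernedAgent(owner="cto@example.com")
agent.act("send_report", risk_score=0.2)    # low risk: allowed
```

The important property is that once paused, the agent stays paused until a human intervenes — the “ability to be halted” the survey found most organisations could not demonstrate.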

There is some reassurance, however: 40% of respondents say humans approve almost all AI actions before they are deployed, and a further 26% evaluate AI outcomes. That said, without improved governance infrastructure, human oversight alone is unlikely to identify and resolve issues before they escalate.
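The human-approval workflow the survey describes amounts to a review queue: AI-proposed actions wait until a reviewer accepts or rejects them. A minimal sketch of that gate, with hypothetical names chosen for the example:

```python
# Human-in-the-loop approval gate (illustrative sketch, hypothetical names):
# AI-proposed actions queue here and deploy only once a reviewer approves.
class ApprovalQueue:
    def __init__(self):
        self.pending = []
        self.approved = []

    def propose(self, action):
        # The AI system submits an action instead of executing it directly.
        self.pending.append(action)

    def review(self, approve_fn):
        # approve_fn encodes the human reviewer's decision per action.
        still_pending = []
        for action in self.pending:
            if approve_fn(action):
                self.approved.append(action)
            else:
                still_pending.append(action)
        self.pending = still_pending

q = ApprovalQueue()
q.propose({"type": "email_customer"})
q.propose({"type": "delete_records"})
q.review(lambda a: a["type"] != "delete_records")  # reviewer blocks deletion
```

A gate like this only helps if it sits in front of every consequential action; actions that bypass the queue are exactly the blind spots the report warns about.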

    ISACA’s findings point towards a major structural issue in how AI is being deployed in different sectors. With over a third of organisations not requiring their employees to disclose where and when AI is used in work products, the potential for blind spots increases.

Despite more stringent regulations that make senior leadership more accountable, organisations are failing to implement and use AI safely and effectively. Many businesses appear to treat AI risk as a purely technical problem rather than something requiring careful management across the entire organisation.

Changing how the integration and actions of AI are handled is essential. Without proper governance and accountability, businesses are not in control of their AI systems. Without control, even small errors could cause reputational and financial harm that many businesses may not recover from.

    (Image by Foundry Co from Pixabay)


    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

    AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
