    Meeting the new ETSI standard for AI security

    By gvfx00@gmail.com | January 16, 2026
    The ETSI EN 304 223 standard introduces baseline security requirements for AI that enterprises must integrate into governance frameworks.

    As organisations embed machine learning into their core operations, this European Standard (EN) establishes concrete provisions for securing AI models and systems. It is the first globally applicable European Standard for AI cybersecurity, formally approved by National Standards Organisations, which strengthens its authority across international markets.

    The standard serves as a necessary benchmark alongside the EU AI Act. It addresses the reality that AI systems carry specific risks – such as susceptibility to data poisoning, model obfuscation, and indirect prompt injection – that traditional software security measures often miss. Its scope runs from deep neural networks and generative AI through to basic predictive systems, explicitly excluding only those used strictly for academic research.


    ETSI standard clarifies the chain of responsibility for AI security

    A persistent hurdle in enterprise AI adoption is determining who owns the risk. The ETSI standard resolves this by defining three primary technical roles: Developers, System Operators, and Data Custodians.

    For many enterprises, these lines blur. A financial services firm that fine-tunes an open-source model for fraud detection counts as both a Developer and a System Operator. This dual status triggers strict obligations, requiring the firm to secure the deployment infrastructure while documenting the provenance of its training data and auditing the model’s design.
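    The overlap of roles and the resulting union of obligations can be modelled as follows. This is a minimal Python sketch: the three role names come from the standard, but the obligation labels are illustrative shorthand, not quoted provisions.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Role(Enum):
    """The three primary technical roles defined by the standard."""
    DEVELOPER = auto()
    SYSTEM_OPERATOR = auto()
    DATA_CUSTODIAN = auto()


# Illustrative obligation labels only -- the standard's actual
# provisions are far more detailed than these summaries.
OBLIGATIONS = {
    Role.DEVELOPER: {"document training-data provenance", "audit model design"},
    Role.SYSTEM_OPERATOR: {"secure deployment infrastructure", "monitor logs"},
    Role.DATA_CUSTODIAN: {"control data permissions", "verify data sensitivity"},
}


@dataclass
class Organisation:
    name: str
    roles: set = field(default_factory=set)

    def obligations(self) -> set:
        """Union of obligations across every role the organisation holds."""
        combined = set()
        for role in self.roles:
            combined |= OBLIGATIONS[role]
        return combined


# The fraud-detection firm from the example holds two roles at once,
# so it inherits both sets of obligations.
firm = Organisation("fraud-detection fintech",
                    {Role.DEVELOPER, Role.SYSTEM_OPERATOR})
assert "document training-data provenance" in firm.obligations()
assert "secure deployment infrastructure" in firm.obligations()
```

    The point of modelling roles explicitly is that compliance tooling can then derive an organisation's full obligation set mechanically, rather than relying on each team to know which hat it is wearing.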

    The inclusion of ‘Data Custodians’ as a distinct stakeholder group directly impacts Chief Data and Analytics Officers (CDAOs). These entities control data permissions and integrity, a role that now carries explicit security responsibilities. Custodians must ensure that the intended usage of a system aligns with the sensitivity of the training data, effectively placing a security gatekeeper within the data management workflow.

    ETSI’s AI standard makes clear that security cannot be an afterthought appended at the deployment stage. During the design phase, organisations must conduct threat modelling that addresses AI-native attacks, such as membership inference and model obfuscation.

    One provision requires developers to restrict functionality to reduce the attack surface. For instance, if a system uses a multi-modal model but only requires text processing, the unused modalities (like image or audio processing) represent a risk that must be managed. This requirement forces technical leaders to reconsider the common practice of deploying massive, general-purpose foundation models where a smaller and more specialised model would suffice.
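    In practice, restricting functionality often means placing a gateway in front of the model that rejects anything outside an explicit allow-list. A minimal sketch, assuming a hypothetical request format with a `modality` field:

```python
# Only explicitly enabled modalities are accepted; everything else is
# rejected before it ever reaches the model. Here the multi-modal
# model's image and audio paths are deliberately left disabled.
ENABLED_MODALITIES = {"text"}


def accept(request: dict) -> bool:
    """Reject any request whose modality is not on the allow-list."""
    return request.get("modality") in ENABLED_MODALITIES


assert accept({"modality": "text", "payload": "classify this"})
assert not accept({"modality": "image", "payload": b"\x89PNG"})
```

    An allow-list defaults to "closed": a modality nobody thought about is rejected automatically, which is the attack-surface reduction the provision is after.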

    The document also enforces strict asset management. Developers and System Operators must maintain a comprehensive inventory of assets, including interdependencies and connectivity. This supports shadow AI discovery; IT leaders cannot secure models they do not know exist. The standard also requires the creation of specific disaster recovery plans tailored to AI attacks, ensuring that a “known good state” can be restored if a model is compromised.
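    The inventory requirement can be approximated as a dependency graph, and shadow AI discovery then reduces to finding components that something depends on but nobody has registered. A sketch with hypothetical asset names:

```python
# Minimal inventory sketch: each registered asset lists the assets it
# depends on. "base-llm" is referenced but never registered -- exactly
# the kind of shadow AI the standard wants surfaced.
inventory = {
    "fraud-model-v3": {"depends_on": ["feature-store", "base-llm"]},
    "feature-store": {"depends_on": ["customer-db"]},
    "customer-db": {"depends_on": []},
}


def undocumented_dependencies(inv: dict) -> set:
    """Dependencies referenced somewhere in the graph but missing from
    the inventory itself -- candidates for unregistered shadow AI."""
    referenced = {dep for asset in inv.values() for dep in asset["depends_on"]}
    return referenced - inv.keys()


assert undocumented_dependencies(inventory) == {"base-llm"}
```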

    Supply chain security presents an immediate friction point for enterprises relying on third-party vendors or open-source repositories. The ETSI standard requires that if a System Operator chooses to use AI models or components that are not well-documented, they must justify that decision and document the associated security risks.

    Practically, procurement teams can no longer accept “black box” solutions. Developers are required to provide cryptographic hashes for model components to verify authenticity. Where training data is sourced publicly (a common practice for Large Language Models), developers must document the source URL and acquisition timestamp. This audit trail is necessary for post-incident investigations, particularly when attempting to identify if a model was subjected to data poisoning during its training phase.
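    Both requirements are mechanically simple. A rough Python sketch, using in-memory bytes as a stand-in for real model files and a hypothetical source URL for the provenance record:

```python
import hashlib
import json
from datetime import datetime, timezone


def sha256_of(artefact: bytes) -> str:
    """SHA-256 digest of a model artefact (bytes stand in for a file)."""
    return hashlib.sha256(artefact).hexdigest()


# Verify a downloaded component against the hash the developer published.
published_hash = sha256_of(b"model-weights-v3")   # stand-in artefact
downloaded = b"model-weights-v3"
assert sha256_of(downloaded) == published_hash     # authenticity confirmed

# Provenance record for publicly sourced training data: source URL plus
# acquisition timestamp, as the standard requires for the audit trail.
provenance = {
    "source_url": "https://example.org/corpus.tar.gz",   # hypothetical URL
    "acquired_at": datetime.now(timezone.utc).isoformat(),
    "sha256": sha256_of(b"corpus contents"),
}
assert json.dumps(provenance)   # record serialises cleanly for auditing
```

    The timestamp matters for poisoning investigations: if a public source is later found to have been tampered with, the acquisition time tells you whether your copy predates the compromise.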

    If an enterprise offers an API to external customers, it must apply controls designed to mitigate AI-focused attacks, such as rate limiting to prevent adversaries from reverse-engineering the model or overwhelming defences to inject poisoned data.
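    Rate limiting is commonly implemented as a token bucket, which allows short bursts while capping sustained throughput. A minimal sketch (the rate and capacity values are arbitrary, not figures from the standard):

```python
import time


class TokenBucket:
    """Token-bucket limiter: callers get `rate` requests per second on
    average, with bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


bucket = TokenBucket(rate=1.0, capacity=3)
results = [bucket.allow() for _ in range(5)]   # burst of 5 immediate calls
assert results[:3] == [True, True, True]        # burst absorbed
assert results[3] is False                      # fourth call throttled
```

    Against model-extraction attacks the limiter is most effective keyed per API credential, since extraction requires a high volume of queries from the same caller.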

    The lifecycle approach extends into the maintenance phase, where the standard treats major updates – such as retraining on new data – as the deployment of a new version. Under the ETSI AI standard, this triggers a requirement for renewed security testing and evaluation.

    Continuous monitoring is also formalised. System Operators must analyse logs not just for uptime, but to detect “data drift” or gradual changes in behaviour that could indicate a security breach. This moves AI monitoring from a performance metric to a security discipline.
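    One crude way to quantify drift from logs is the shift in a monitored metric's mean, measured in baseline standard deviations; production systems typically use richer statistics such as the population stability index or Kolmogorov-Smirnov tests. The figures below are invented for illustration:

```python
import statistics


def drift_score(baseline: list, current: list) -> float:
    """Absolute shift in mean, in units of the baseline standard
    deviation -- a crude proxy for data drift."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(current) - mu) / sigma


# Hypothetical logged metric, e.g. mean prediction confidence per hour.
baseline = [0.50, 0.52, 0.48, 0.51, 0.49, 0.50]
stable = [0.49, 0.51, 0.50, 0.52]      # normal variation
shifted = [0.80, 0.85, 0.82, 0.84]     # sudden behavioural change

assert drift_score(baseline, stable) < 1.0    # within normal variation
assert drift_score(baseline, shifted) > 3.0   # escalate for security review
```

    Routing a large score to the security team, not just the ML team, is what turns this from a performance metric into the security discipline the standard describes: a sudden behavioural shift may indicate poisoning or tampering rather than ordinary model decay.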

    The standard also addresses the “End of Life” phase. When a model is decommissioned or transferred, organisations must involve Data Custodians to ensure the secure disposal of data and configuration details. This provision prevents the leakage of sensitive intellectual property or training data through discarded hardware or forgotten cloud instances.

    Executive oversight and governance

    Compliance with ETSI EN 304 223 requires a review of existing cybersecurity training programmes. The standard mandates that training be tailored to specific roles, ensuring that developers understand secure coding for AI while general staff remain aware of threats like social engineering via AI outputs.

    “ETSI EN 304 223 represents an important step forward in establishing a common, rigorous foundation for securing AI systems”, said Scott Cadzow, Chair of ETSI’s Technical Committee for Securing Artificial Intelligence.

    “At a time when AI is being increasingly integrated into critical services and infrastructure, the availability of clear, practical guidance that reflects both the complexity of these technologies and the realities of deployment cannot be underestimated. The work that went into delivering this framework is the result of extensive collaboration and it means that organisations can have full confidence in AI systems that are resilient, trustworthy, and secure by design.”

    Implementing these baselines in ETSI’s AI security standard provides a structure for safer innovation. By enforcing documented audit trails, clear role definitions, and supply chain transparency, enterprises can mitigate the risks associated with AI adoption while establishing a defensible position for future regulatory audits.

    An upcoming Technical Report (ETSI TR 104 159) will apply these principles specifically to generative AI, targeting issues like deepfakes and disinformation.

