
Modeling Threats to AI-ML Systems Using STRIDE.

Affiliations

Department of Computer Science, Università Degli Studi di Milano, 20133 Milan, Italy.

Center for Cyber-Physical Systems, Khalifa University of Science and Technology, Abu Dhabi 127788, United Arab Emirates.

Publication information

Sensors (Basel). 2022 Sep 3;22(17):6662. doi: 10.3390/s22176662.

Abstract

The application of emerging technologies, such as Artificial Intelligence (AI), entails risks that need to be addressed to ensure secure and trustworthy socio-technical infrastructures. Machine Learning (ML), the most developed subfield of AI, allows for improved decision-making processes. However, ML models exhibit specific vulnerabilities that conventional IT systems are not subject to. As systems incorporating ML components become increasingly pervasive, the need to provide security practitioners with threat modeling tailored to the specific AI-ML pipeline is of paramount importance. Currently, no well-established approach exists that accounts for the entire ML life-cycle in the identification and analysis of threats targeting ML techniques. In this paper, we propose a methodology, STRIDE-AI, for assessing the security of AI-ML-based systems. We discuss how to apply the FMEA process to identify how assets generated and used at different stages of the ML life-cycle may fail. By adapting Microsoft's STRIDE approach to the AI-ML domain, we map potential ML failure modes to threats and to the security properties these threats may endanger. The proposed methodology can assist ML practitioners in choosing the most effective security controls to protect ML assets. We illustrate STRIDE-AI with the help of a real-world use case selected from the TOREADOR H2020 project.
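To make the mapping described above concrete, here is a minimal illustrative sketch (not taken from the paper) in Python. The six STRIDE categories and the security properties they violate follow Microsoft's standard framework; the ML asset names and failure-mode mappings (`training dataset`, `data poisoning`, etc.) are hypothetical examples chosen for illustration, not the paper's actual asset inventory.

```python
# Standard STRIDE threat categories and the security property each one
# violates, per Microsoft's threat-modeling framework.
STRIDE_PROPERTIES = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information disclosure": "Confidentiality",
    "Denial of service": "Availability",
    "Elevation of privilege": "Authorization",
}

# Hypothetical ML pipeline assets with example failure modes, each mapped
# to a STRIDE threat category. Assets and mappings are illustrative only.
ML_FAILURE_MODES = {
    "training dataset": [
        ("data poisoning", "Tampering"),
        ("membership inference", "Information disclosure"),
    ],
    "trained model": [
        ("model extraction", "Information disclosure"),
        ("adversarial evasion", "Tampering"),
    ],
    "inference API": [
        ("query flooding", "Denial of service"),
    ],
}

def endangered_properties(asset: str) -> list[str]:
    """Return the security properties threatened for a given ML asset,
    derived by following each failure mode's STRIDE category."""
    return sorted({STRIDE_PROPERTIES[threat]
                   for _, threat in ML_FAILURE_MODES[asset]})
```

For example, `endangered_properties("training dataset")` traverses the poisoning and membership-inference failure modes and reports that both Integrity and Confidentiality are at stake, which is the kind of asset-centric output a practitioner would use to select controls.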


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1c8e/9459912/3eac21b5587b/sensors-22-06662-g001.jpg
