The privacy-explainability trade-off: unraveling the impacts of differential privacy and federated learning on attribution methods.

Authors

Saifullah Saifullah, Mercier Dominique, Lucieri Adriano, Dengel Andreas, Ahmed Sheraz

Affiliations

Department of Computer Science, RPTU Kaiserslautern-Landau, Kaiserslautern, Rhineland-Palatinate, Germany.

Smart Data and Knowledge Services (SDS), DFKI GmbH, Kaiserslautern, Rhineland-Palatinate, Germany.

Publication

Front Artif Intell. 2024 Jul 3;7:1236947. doi: 10.3389/frai.2024.1236947. eCollection 2024.

Abstract

Since the advent of deep learning (DL), the field has witnessed a continuous stream of innovations. However, the translation of these advancements into practical applications has not kept pace, particularly in safety-critical domains where artificial intelligence (AI) must meet stringent regulatory and ethical standards. This is underscored by the ongoing research in eXplainable AI (XAI) and privacy-preserving machine learning (PPML), which seek to address some limitations associated with these opaque and data-intensive models. Despite brisk research activity in both fields, little attention has been paid to their interaction. This work is the first to thoroughly investigate the effects of privacy-preserving techniques on explanations generated by common XAI methods for DL models. A detailed experimental analysis is conducted to quantify the impact of private training on the explanations provided by DL models, applied to six image datasets and five time series datasets across various domains. The analysis comprises three privacy techniques, nine XAI methods, and seven model architectures. The findings suggest non-negligible changes in explanations through the implementation of privacy measures. Apart from reporting individual effects of PPML on XAI, the paper gives clear recommendations for the choice of techniques in real applications. By unveiling the interdependencies of these pivotal technologies, this research marks an initial step toward resolving the challenges that hinder the deployment of AI in safety-critical settings.
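The abstract describes quantifying how private training changes the explanations a model produces. As a minimal illustration of that kind of measurement, the sketch below compares two attribution maps with a Spearman rank correlation; the maps here are synthetic toy data, and the choice of similarity metric is our assumption for illustration, not necessarily the one used in the paper.

```python
import numpy as np

def rank_correlation(attr_a, attr_b):
    """Spearman rank correlation between two flattened attribution maps.

    A value near 1.0 means both models highlight the same input regions;
    lower values indicate that the explanations have drifted apart.
    """
    a = attr_a.ravel()
    b = attr_b.ravel()
    # Double argsort converts values to ranks (valid here: values are distinct).
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb)))

# Toy example: a baseline attribution map and a perturbed variant standing in
# for the attributions of a hypothetical privately trained model.
rng = np.random.default_rng(0)
baseline = rng.normal(size=(8, 8))
dp_variant = baseline + 0.5 * rng.normal(size=(8, 8))

print(rank_correlation(baseline, baseline))    # identical maps -> 1.0
print(rank_correlation(baseline, dp_variant))  # agreement degrades below 1.0
```

In a real replication one would substitute actual attribution maps (e.g., from a gradient-based XAI method) computed for a model trained with and without a privacy mechanism such as DP-SGD, then aggregate the similarity scores over a test set.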

Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7954/11253022/4e9a326fa32b/frai-07-1236947-g0001.jpg
