

The privacy-explainability trade-off: unraveling the impacts of differential privacy and federated learning on attribution methods.

Authors

Saifullah Saifullah, Dominique Mercier, Adriano Lucieri, Andreas Dengel, Sheraz Ahmed

Affiliations

Department of Computer Science, RPTU Kaiserslautern-Landau, Kaiserslautern, Rhineland-Palatinate, Germany.

Smart Data and Knowledge Services (SDS), DFKI GmbH, Kaiserslautern, Rhineland-Palatinate, Germany.

Publication

Front Artif Intell. 2024 Jul 3;7:1236947. doi: 10.3389/frai.2024.1236947. eCollection 2024.

Abstract

Since the advent of deep learning (DL), the field has witnessed a continuous stream of innovations. However, the translation of these advancements into practical applications has not kept pace, particularly in safety-critical domains where artificial intelligence (AI) must meet stringent regulatory and ethical standards. This is underscored by the ongoing research in eXplainable AI (XAI) and privacy-preserving machine learning (PPML), which seek to address some limitations associated with these opaque and data-intensive models. Despite brisk research activity in both fields, little attention has been paid to their interaction. This work is the first to thoroughly investigate the effects of privacy-preserving techniques on explanations generated by common XAI methods for DL models. A detailed experimental analysis is conducted to quantify the impact of private training on the explanations provided by DL models, applied to six image datasets and five time series datasets across various domains. The analysis comprises three privacy techniques, nine XAI methods, and seven model architectures. The findings suggest non-negligible changes in explanations through the implementation of privacy measures. Apart from reporting individual effects of PPML on XAI, the paper gives clear recommendations for the choice of techniques in real applications. By unveiling the interdependencies of these pivotal technologies, this research marks an initial step toward resolving the challenges that hinder the deployment of AI in safety-critical settings.
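To make the abstract's core measurement concrete, below is a minimal sketch of how one might quantify the effect of private training on attributions: compute attribution maps for the same inputs under a conventionally trained model and a privacy-preserving counterpart (e.g., one trained with DP-SGD), then score their agreement per sample. This is an illustrative assumption, not the paper's exact protocol; the function name attribution_shift, the placeholder models model_baseline and model_private, the choice of Integrated Gradients (via Captum), and Spearman rank correlation as the agreement metric are all assumptions for this sketch.

```python
# Hypothetical sketch (not the paper's protocol): measuring how much
# privacy-preserving training changes a model's attributions.
# Assumes two trained classifiers with identical architecture:
#   model_baseline -- trained without privacy measures
#   model_private  -- trained with a privacy technique such as DP-SGD
# plus a batch of evaluation inputs and their ground-truth labels.

import torch
from captum.attr import IntegratedGradients
from scipy.stats import spearmanr


def attribution_shift(model_baseline, model_private, inputs, labels):
    """Return per-sample Spearman correlation between the two models'
    Integrated Gradients attributions; lower values indicate a larger
    shift in the explanation induced by private training."""
    model_baseline.eval()
    model_private.eval()

    ig_base = IntegratedGradients(model_baseline)
    ig_priv = IntegratedGradients(model_private)

    # Attribute each prediction with respect to the ground-truth class.
    attr_base = ig_base.attribute(inputs, target=labels)
    attr_priv = ig_priv.attribute(inputs, target=labels)

    scores = []
    for a, b in zip(attr_base, attr_priv):
        rho, _ = spearmanr(a.flatten().detach().cpu().numpy(),
                           b.flatten().detach().cpu().numpy())
        scores.append(rho)
    return scores
```

In this setting, averaging the returned correlations over an evaluation set gives a single number summarizing how strongly a given privacy technique perturbs the explanations of one attribution method; repeating it across methods and privacy techniques mirrors, in miniature, the kind of grid the study evaluates.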


[Figure] https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7954/11253022/4e9a326fa32b/frai-07-1236947-g0001.jpg
