Rosenblatt Matthew, Rodriguez Raimundo X, Westwater Margaret L, Dai Wei, Horien Corey, Greene Abigail S, Constable R Todd, Noble Stephanie, Scheinost Dustin
Department of Biomedical Engineering, Yale School of Engineering and Applied Science, New Haven, CT 06510, USA.
Interdepartmental Neuroscience Program, Yale School of Medicine, New Haven, CT 06510, USA.
Patterns (N Y). 2023 May 15;4(7):100756. doi: 10.1016/j.patter.2023.100756. eCollection 2023 Jul 14.
Neuroimaging-based predictive models continue to improve in performance, yet a widely overlooked aspect of these models is "trustworthiness," or robustness to data manipulations. High trustworthiness is imperative for researchers to have confidence in their findings and interpretations. In this work, we used functional connectomes to explore how minor data manipulations influence machine learning predictions. These manipulations included a method to falsely enhance prediction performance and adversarial noise attacks designed to degrade performance. Although these data manipulations drastically changed model performance, the original and manipulated data were extremely similar (r = 0.99) and did not affect other downstream analyses. Essentially, connectome data could be inconspicuously modified to achieve any desired prediction performance. Overall, our enhancement attacks and our evaluation of existing adversarial noise attacks in connectome-based models highlight the need for countermeasures that improve trustworthiness and preserve the integrity of academic research and any potential translational applications.
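To make the enhancement-attack idea concrete, the following is a minimal sketch (not the authors' released code): a small multiple of the standardized prediction target is added to every connectome edge, so a model can read the target directly out of the features while the manipulated matrix remains almost perfectly correlated with the original. The synthetic data, the perturbation size epsilon, and the ridge model are all illustrative assumptions, not details taken from the paper.

```python
# Illustrative enhancement attack on connectome-like features (assumed setup).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: 200 "subjects", 1,000 connectome edges, a continuous target.
n_subjects, n_edges = 200, 1000
X = rng.standard_normal((n_subjects, n_edges))
y = rng.standard_normal(n_subjects)

# Enhancement attack: inject a small copy of the standardized target into
# every edge; epsilon (assumed value) controls how inconspicuous it is.
epsilon = 0.1
y_std = (y - y.mean()) / y.std()
X_attacked = X + epsilon * y_std[:, None]

# The manipulated data remain almost identical to the originals ...
r = np.corrcoef(X.ravel(), X_attacked.ravel())[0, 1]
print(f"corr(original, manipulated) = {r:.3f}")  # ~0.99

# ... yet cross-validated prediction jumps from chance to well above it.
for name, data in [("original", X), ("attacked", X_attacked)]:
    score = cross_val_score(Ridge(alpha=1.0), data, y, cv=5).mean()
    print(f"cross-validated R^2, {name} data: {score:.2f}")
```

Because the injected signal is spread thinly across all 1,000 edges, each individual edge changes only slightly (hence the near-unit correlation), yet pooled across edges the signal dominates the noise. This is the sense in which such manipulations can be inconspicuous at the data level while decisive at the model level.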