Ding Xiruo, Sheng Zhecheng, Hur Brian, Tauscher Justin, Ben-Zeev Dror, Yetişgen Meliha, Pakhomov Serguei, Cohen Trevor
Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, USA.
Department of Pharmaceutical Care & Health Systems, University of Minnesota, Minneapolis, MN, USA.
J Biomed Inform. 2025 Aug;168:104858. doi: 10.1016/j.jbi.2025.104858. Epub 2025 Jun 8.
Multi-institutional datasets are widely used for machine learning from clinical data to increase dataset size and improve generalization. However, deep learning models in particular may learn to recognize the source of a data element, leading to biased predictions. For example, deep learning models for image recognition trained on chest radiographs with COVID-19 positive and negative examples drawn from different data sources can respond to indicators of provenance (e.g., radiological annotations placed outside the lung area according to institution-specific practices) rather than pathology, generalizing poorly beyond their training data. Bias of this sort, called confounding by provenance, is of concern in natural language processing (NLP) because provenance indicators (e.g., institution-specific section headers, or region-specific dialects) are pervasive in language data. Prior work on addressing such bias has focused on statistical methods, without providing a solution for deep learning models for NLP.
Recent work in representation learning has shown that representing the weights of a trained deep network as task vectors allows them to be composed arithmetically to steer model capabilities toward desired behaviors. In this work, we evaluate the extent to which reducing a model's ability to distinguish between contributing sites with such task arithmetic can mitigate confounding by provenance. To do so, we propose two model-agnostic methods, Task Arithmetic for Provenance Effect Reduction (TAPER) and Dominance-Aligned Polarized Provenance Effect Reduction (DAPPER), extending the task vectors approach to a novel problem domain.
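The core task-vector operation the methods build on can be sketched as follows. This is a minimal illustration of task arithmetic in general, not the paper's TAPER or DAPPER implementations: a task vector is the element-wise difference between fine-tuned and pretrained weights, and subtracting a scaled copy of a vector learned on a source-identification task is one way to suppress the corresponding capability (function names and the `scale` parameter are illustrative assumptions).

```python
import numpy as np

def task_vector(finetuned, pretrained):
    """Task vector: element-wise difference between fine-tuned and base weights.

    Weights are represented here as dicts of numpy arrays keyed by layer name
    (a stand-in for a real model state dict)."""
    return {name: finetuned[name] - pretrained[name] for name in pretrained}

def negate_capability(pretrained, vector, scale=1.0):
    """Subtract a scaled task vector from the base weights.

    If `vector` was obtained by fine-tuning on a provenance-identification
    task, this moves the model away from that capability; `scale` is a
    hypothetical knob controlling how strongly it is suppressed."""
    return {name: pretrained[name] - scale * vector[name] for name in pretrained}

# Toy example: two-parameter "model" fine-tuned to recognize data source.
pretrained = {"w": np.array([1.0, 2.0])}
source_classifier = {"w": np.array([1.5, 1.0])}

vec = task_vector(source_classifier, pretrained)      # {"w": [0.5, -1.0]}
edited = negate_capability(pretrained, vec, scale=1.0)  # {"w": [0.5, 3.0]}
```

In practice the same arithmetic is applied per tensor over a transformer's full state dict; the paper's methods extend this idea to reduce the model's ability to distinguish contributing sites.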
Evaluation on three datasets shows that the task vector approach improves robustness to confounding by provenance for both RoBERTa and Llama-2 models, with particular gains at the extremes of distribution shift.
This work emphasizes the importance of adjusting for confounding by provenance, especially in extreme cases of distribution shift. When deep learning models are used, DAPPER and TAPER are effective at mitigating such bias. They provide a novel mitigation strategy for confounding by provenance, with broad applicability to address other sources of bias in composite clinical datasets. Source code is available within the DeconDTN toolkit: https://github.com/LinguisticAnomalies/DeconDTN-toolkit.