Contrastive pretraining improves deep learning classification of endocardial electrograms in a preclinical model.

Author information

Hunt Bram, Kwan Eugene, Bergquist Jake, Brundage James, Orkild Benjamin, Dong Jiawei, Paccione Eric, Yazaki Kyoichiro, MacLeod Rob S, Dosdall Derek J, Tasdizen Tolga, Ranjan Ravi

Affiliations

Department of Biomedical Engineering, University of Utah, Salt Lake City, Utah.

Nora Eccles Harrison Cardiovascular Research and Training Institute, University of Utah, Salt Lake City, Utah.

Publication information

Heart Rhythm O2. 2025 Jan 21;6(4):473-480. doi: 10.1016/j.hroo.2025.01.008. eCollection 2025 Apr.

Abstract

BACKGROUND

Rotors and focal ectopies, or "drivers," are hypothesized mechanisms of persistent atrial fibrillation (AF). Machine learning algorithms have been used to identify these drivers, but the limited size of current driver data sets constrains their performance.

OBJECTIVE

We proposed that pretraining using unsupervised learning on a substantial data set of unlabeled electrograms could enhance classifier accuracy when applied to a smaller driver data set.

METHODS

We used a SimCLR-based framework to pretrain a residual neural network on 113,000 unlabeled 64-electrode measurements from a canine model of AF. The network was then fine-tuned to identify drivers from intracardiac electrograms. Various augmentations, including cropping, Gaussian blurring, and rotation, were applied during pretraining to improve the robustness of the learned representations.
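SimCLR-style pretraining pulls the representations of two augmented views of the same recording together while pushing apart views of different recordings, typically via the NT-Xent (normalized temperature-scaled cross-entropy) loss. The sketch below is illustrative, not the authors' code: it implements NT-Xent in plain Python over toy 2-D embeddings standing in for the network's projection outputs, with interleaved positive pairs as an assumed batch layout.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nt_xent_loss(embeddings, temperature=0.5):
    """NT-Xent contrastive loss over a batch of 2N embeddings.

    Assumes items 2k and 2k+1 are the two augmented views of the
    same underlying recording (the positive pair). For each anchor,
    the positive's similarity is contrasted against all other
    embeddings in the batch; lower loss means aligned pairs.
    """
    n = len(embeddings)
    losses = []
    for i in range(n):
        j = i + 1 if i % 2 == 0 else i - 1  # index of the positive view
        # denominator: all non-anchor similarities, exponentiated
        denom = sum(math.exp(cosine_sim(embeddings[i], embeddings[k]) / temperature)
                    for k in range(n) if k != i)
        pos = math.exp(cosine_sim(embeddings[i], embeddings[j]) / temperature)
        losses.append(-math.log(pos / denom))
    return sum(losses) / n
```

In the paper's setting, the augmentations (cropping, Gaussian blurring, rotation) generate the two views of each 64-electrode measurement; a batch whose positive pairs map to nearby embeddings yields a lower loss than one whose pairs are scattered.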

RESULTS

Pretraining significantly improved driver detection accuracy compared with a non-pretrained network (80.8% vs 62.5%). The pretrained network also demonstrated greater resilience to reductions in training data set size, maintaining higher accuracy even with a 30% reduction in data. Gradient-weighted Class Activation Mapping analysis revealed that the network's attention aligned well with manually annotated driver regions, suggesting that the network learned meaningful features for driver detection.
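Gradient-weighted Class Activation Mapping (Grad-CAM) localizes which spatial regions of a convolutional feature map drove a class prediction: each channel is weighted by the global average of its gradient with respect to the class score, the weighted maps are summed, and a ReLU keeps only positively contributing regions. A minimal sketch of that computation, assuming the activations and gradients have already been extracted from some convolutional layer (the toy inputs here are hypothetical, not the paper's data):

```python
def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer's activations and gradients.

    activations: list of K feature maps, each an H x W list of lists.
    gradients:   matching maps of d(class score)/d(activation).
    Each channel is weighted by its spatially averaged gradient,
    the weighted maps are summed, and ReLU zeroes out regions that
    argue against the class.
    """
    K = len(activations)
    H, W = len(activations[0]), len(activations[0][0])
    # channel weights: global average pool of each gradient map
    weights = [sum(sum(row) for row in g) / (H * W) for g in gradients]
    cam = [[0.0] * W for _ in range(H)]
    for k in range(K):
        for i in range(H):
            for j in range(W):
                cam[i][j] += weights[k] * activations[k][i][j]
    return [[max(0.0, v) for v in row] for row in cam]  # ReLU
```

High values in the resulting map mark the electrode regions the classifier attended to, which is the quantity the study compared against the manually annotated driver regions.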

CONCLUSION

This study demonstrates that contrastive pretraining can enhance the accuracy of driver detection algorithms in AF. The findings support the broader application of transfer learning to other electrogram-based tasks, potentially improving outcomes in clinical electrophysiology.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/15d8/12047512/48c1fa9dd284/gr1.jpg
