Jiao Jianbo, Cai Yifan, Alsharid Mohammad, Drukker Lior, Papageorghiou Aris T, Noble J Alison
Department of Engineering Science, University of Oxford, Oxford, UK.
Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK.
Med Image Comput Comput Assist Interv. 2020 Oct;12263:534-543. doi: 10.1007/978-3-030-59716-0_51.
In medical imaging, manual annotations can be expensive to acquire and sometimes infeasible to access, making conventional deep learning-based models difficult to scale. It would therefore be beneficial if useful representations could be derived from raw data without manual annotations. In this paper, we address the problem of self-supervised representation learning from multi-modal ultrasound video-speech raw data. In this setting, we assume a high correlation between the ultrasound video and the sonographer's corresponding narrative speech audio. To learn meaningful representations, the model needs to identify this correlation and, at the same time, understand the underlying anatomical features. We design a framework to model the correspondence between video and audio without any human annotations. Within this framework, we introduce cross-modal contrastive learning and an affinity-aware self-paced learning scheme to enhance correlation modelling. Experimental evaluations on multi-modal fetal ultrasound video and audio show that the proposed approach learns strong representations and transfers well to the downstream tasks of standard plane detection and eye-gaze prediction.
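To make the two named components concrete, below is a minimal PyTorch sketch of a symmetric cross-modal InfoNCE loss with an optional affinity-based self-paced weighting. All names (`cross_modal_nce_loss`, `temperature`, `affinity_threshold`) and the symmetric-InfoNCE form are illustrative assumptions; the paper's exact loss, negative sampling, and pacing schedule may differ.

```python
import torch
import torch.nn.functional as F

def cross_modal_nce_loss(video_emb, audio_emb, temperature=0.1,
                         affinity_threshold=None):
    """Symmetric InfoNCE over a batch of paired video/speech embeddings.

    video_emb, audio_emb: (B, D) tensors; row i of both tensors comes from
    the same clip, so the diagonal of the similarity matrix holds the
    positive pairs and every off-diagonal entry serves as a negative.
    """
    v = F.normalize(video_emb, dim=1)
    a = F.normalize(audio_emb, dim=1)
    logits = v @ a.t() / temperature              # (B, B) scaled cosine sims
    targets = torch.arange(v.size(0), device=v.device)

    # Per-sample losses in both directions: video->audio and audio->video.
    loss_v2a = F.cross_entropy(logits, targets, reduction="none")
    loss_a2v = F.cross_entropy(logits.t(), targets, reduction="none")
    per_sample = 0.5 * (loss_v2a + loss_a2v)

    if affinity_threshold is not None:
        # Self-paced weighting (an assumption, not the paper's scheme):
        # keep only pairs whose cross-modal affinity (diagonal cosine
        # similarity) exceeds a threshold, so well-correlated "easy"
        # clips dominate early training.
        affinity = (v * a).sum(dim=1)
        weights = (affinity.detach() > affinity_threshold).float()
        return (weights * per_sample).sum() / weights.sum().clamp(min=1.0)

    return per_sample.mean()
```

In use, a training loop might anneal `affinity_threshold` from a high value toward -1.0 across epochs so harder, less-correlated clips enter the objective gradually; this pacing schedule is one plausible reading of "affinity-aware self-paced learning", not the published recipe.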