Inserm, LaTIM UMR 1101, 22 rue Camille Desmoulins, Brest 29238, France; Université de Bretagne Occidentale, 3 rue des Archives, Brest 29238, France; IMT Atlantique, Technopôle Brest-Iroise, Brest 29238, France.
Inserm, LaTIM UMR 1101, 22 rue Camille Desmoulins, Brest 29238, France; IMT Atlantique, Technopôle Brest-Iroise, Brest 29238, France.
Med Image Anal. 2021 Jul;71:102083. doi: 10.1016/j.media.2021.102083. Epub 2021 Apr 22.
Breast cancer screening benefits from the visual analysis of multiple views of routine mammograms. As in clinical practice, computer-aided diagnosis (CAD) systems could be enhanced by integrating multi-view information. In this work, we propose a new multi-tasking framework that combines craniocaudal (CC) and mediolateral-oblique (MLO) mammograms for automatic breast mass detection. Rather than addressing mass recognition only, we exploit the multi-tasking properties of deep networks to jointly learn mass matching and classification, towards better detection performance. Specifically, we propose a unified Siamese network that combines patch-level mass/non-mass classification and dual-view mass matching to take full advantage of multi-view information. This model is exploited in a full-image detection pipeline based on You-Only-Look-Once (YOLO) region proposals. We carry out extensive experiments to highlight the contribution of dual-view matching in both patch-level classification and examination-level detection scenarios. Results demonstrate that mass matching substantially improves full-pipeline detection performance, outperforming conventional single-task schemes with an Area Under the Curve (AUC) of 94.78% and a classification accuracy of 0.8791. Interestingly, mass classification also improves the performance of mass matching, which demonstrates the complementarity of the two tasks. Our method further guides clinicians by providing accurate dual-view mass correspondences, suggesting that it could act as a relevant second opinion for mammogram interpretation and breast cancer diagnosis.
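To make the dual-view multi-task idea concrete, below is a minimal PyTorch sketch of a Siamese network with a shared backbone and two heads: one for patch-level mass/non-mass classification and one for dual-view (CC/MLO) matching. The backbone layers, embedding size, and the absolute-difference fusion for matching are illustrative assumptions, not the paper's exact architecture; the YOLO region-proposal stage is omitted.

import torch
import torch.nn as nn

class SiameseMassNet(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        # Shared backbone applied to both CC and MLO patches; the weight
        # sharing is what makes the network Siamese. Layer sizes are
        # hypothetical placeholders.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim), nn.ReLU(),
        )
        # Task 1: patch-level mass / non-mass classification (per view).
        self.cls_head = nn.Linear(embed_dim, 2)
        # Task 2: dual-view matching -- do the two patches show the same mass?
        self.match_head = nn.Linear(embed_dim, 2)

    def forward(self, cc_patch, mlo_patch):
        z_cc = self.backbone(cc_patch)
        z_mlo = self.backbone(mlo_patch)
        cls_cc = self.cls_head(z_cc)    # mass/non-mass logits, CC view
        cls_mlo = self.cls_head(z_mlo)  # mass/non-mass logits, MLO view
        # One common Siamese fusion: element-wise absolute difference of
        # the two embeddings, fed to the matching head.
        match = self.match_head(torch.abs(z_cc - z_mlo))
        return cls_cc, cls_mlo, match

Joint training would then combine the two objectives, e.g. loss = ce(cls_cc, y_cc) + ce(cls_mlo, y_mlo) + ce(match, y_match), so that matching and classification regularize each other, which is the complementarity the abstract reports.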