Liu Yiqing, Shi Huijuan, He Qiming, Fu Yuqiu, Wang Yizhi, He Yonghong, Han Anjia, Guan Tian
Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, Guangdong, China.
Department of Pathology, the First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China.
Heliyon. 2024 Feb 19;10(4):e26413. doi: 10.1016/j.heliyon.2024.e26413. eCollection 2024 Feb 29.
Identifying the invasive cancer area is a crucial step in the automated diagnosis of digital breast pathology slides. When examining the pathological sections of patients with invasive ductal carcinoma, several evaluations are required specifically for the invasive cancer area. However, little existing work can effectively distinguish the invasive cancer area from ductal carcinoma in situ in whole slide images (WSIs). To address this issue, we propose a novel architecture named ResMTUnet that combines the strengths of the vision transformer and the CNN and uses multi-task learning to achieve accurate invasive carcinoma recognition and segmentation in breast cancer. Furthermore, we introduce a multi-scale input model based on ResMTUnet with a conditional random field, named MS-ResMTUNet, to perform segmentation on WSIs. Our systematic experiments show that the proposed network outperforms other competitive methods and effectively segments invasive carcinoma regions in WSIs. This lays a solid foundation for subsequent analysis of breast pathology slides. The code is available at: https://github.com/liuyiqing2018/MS-ResMTUNet.
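For illustration only, the sketch below shows the general pattern the abstract describes: a CNN encoder whose feature map is refined by a transformer bottleneck, with two task heads trained jointly (pixel-level segmentation and patch-level classification). This is not the authors' ResMTUnet; all layer choices, sizes, and the loss weighting are assumptions, and the real implementation should be taken from the linked repository.

```python
# Minimal multi-task CNN + transformer sketch (illustrative; not the authors' ResMTUnet).
import torch
import torch.nn as nn


class MultiTaskSegClassifier(nn.Module):
    def __init__(self, in_ch=3, num_classes=3, embed_dim=128):
        super().__init__()
        # CNN encoder: downsamples the input by 8x and extracts local features.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, embed_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Transformer bottleneck: models long-range context between feature tokens.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Segmentation head: upsamples back to input resolution, per-pixel logits.
        self.seg_head = nn.Sequential(
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
            nn.Conv2d(embed_dim, num_classes, 1),
        )
        # Classification head: global average pooling over tokens + linear layer.
        self.cls_head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        feat = self.encoder(x)                          # (B, C, H/8, W/8)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)        # (B, H*W/64, C)
        tokens = self.transformer(tokens)
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        seg_logits = self.seg_head(feat)                # pixel-level class scores
        cls_logits = self.cls_head(tokens.mean(dim=1))  # patch-level class scores
        return seg_logits, cls_logits


if __name__ == "__main__":
    model = MultiTaskSegClassifier()
    x = torch.randn(2, 3, 256, 256)
    seg, cls = model(x)
    # Joint multi-task objective: segmentation + classification cross-entropy
    # (equal weighting here is an assumption).
    seg_target = torch.randint(0, 3, (2, 256, 256))
    cls_target = torch.randint(0, 3, (2,))
    loss = nn.CrossEntropyLoss()(seg, seg_target) + nn.CrossEntropyLoss()(cls, cls_target)
    print(seg.shape, cls.shape, loss.item())
```

In a WSI pipeline of this kind, such a patch model would typically be applied tile by tile at one or more magnifications, with the per-tile segmentation maps stitched back into a slide-level mask; the paper's multi-scale input and conditional-random-field refinement are not reproduced in this sketch.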