Department of Radiology, Stanford University, USA.
Department of Pathology, Stanford University, USA.
Med Image Anal. 2025 Jan;99:103356. doi: 10.1016/j.media.2024.103356. Epub 2024 Sep 30.
Breast cancer is a significant global public health concern, with various treatment options available depending on tumor characteristics. Pathological examination of excision specimens after surgery provides essential information for treatment decisions. However, manual selection of representative sections for histological examination is laborious and subjective, leading to potential sampling errors and variability, especially in carcinomas previously treated with chemotherapy. Furthermore, accurate identification of residual tumor remains challenging, underscoring the need for systematic or assisted methods. Developing deep-learning algorithms for automated cancer detection on radiology images requires radiology-pathology registration: aligning radiology and histopathology images establishes reliable, accurately labeled ground truth cancer labels for training. However, aligning these images is challenging due to differences in content and resolution, tissue deformation, artifacts, and imprecise correspondence. We present a novel deep learning-based pipeline for the affine registration of faxitron images (X-ray images of macrosections of ex-vivo breast tissue) with their corresponding histopathology images of tissue segments. The proposed model combines convolutional neural networks and vision transformers, allowing it to effectively capture both local and global information from the entire tissue macrosection as well as its segments. This integrated approach enables simultaneous registration and stitching of image segments, facilitating segment-to-macrosection registration through a puzzling-based mechanism.
To address the scarcity of multi-modal ground truth data, we train the model on synthetic mono-modal data in a weakly supervised manner. The trained model performed well on multi-modal registration, achieving an average landmark error of 1.51 mm (±2.40 mm) and a stitching distance of 1.15 mm (±0.94 mm). The model significantly outperforms existing baselines, both deep learning-based and iterative, and is approximately 200 times faster than the iterative approach. This work bridges a gap between current research and the clinical workflow and has the potential to improve efficiency and accuracy in breast cancer evaluation and streamline the pathology workflow.
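For readers unfamiliar with the landmark-error metric reported above, the sketch below (illustrative only, not the authors' implementation) shows how a 2D affine transform maps landmark coordinates and how a mean landmark error in millimeters could be computed; the function names and the pixel spacing are hypothetical.

```python
import numpy as np

def affine_transform(points, A):
    """Apply a 2x3 affine matrix A to an Nx2 array of points (homogeneous form)."""
    ones = np.ones((points.shape[0], 1))
    return np.hstack([points, ones]) @ A.T

def mean_landmark_error(moving, fixed, A, mm_per_px=0.1):
    """Mean Euclidean distance (mm) between transformed moving landmarks
    and their corresponding fixed landmarks, at an assumed pixel spacing."""
    warped = affine_transform(moving, A)
    return np.mean(np.linalg.norm(warped - fixed, axis=1)) * mm_per_px

# Identity transform: the error is just the raw landmark displacement.
A_identity = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]])
moving = np.array([[10.0, 20.0], [30.0, 40.0]])
fixed = moving + np.array([5.0, 0.0])  # landmarks shifted 5 px along x

print(mean_landmark_error(moving, fixed, A_identity))  # 0.5 (mm)
```

A registration model would predict the six affine parameters so that this error is minimized; here, a translation of +5 px in x would drive the error to zero.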