He Lanting, Luan Lan, Hu Dan
School of Optoelectronics, Beijing Institute of Technology, Beijing, China.
College of Computer and Information Engineering, Guizhou University of Commerce, Guiyang, China.
Front Med (Lausanne). 2025 Jun 2;12:1574514. doi: 10.3389/fmed.2025.1574514. eCollection 2025.
The integration of pathology and radiology through artificial intelligence (AI) represents a groundbreaking advancement in medical imaging, providing a powerful tool for accurate diagnostics and the optimization of clinical workflows. Traditional image classification methods encounter substantial challenges due to the inherent complexity and heterogeneity of medical imaging datasets, which include multi-modal data sources, imbalanced class distributions, and the critical need for interpretability in clinical decision-making.
Addressing these limitations, this study introduces a deep learning framework tailored for AI-assisted medical imaging tasks. It incorporates two novel components: the Adaptive Multi-Resolution Imaging Network (AMRI-Net) and the Explainable Domain-Adaptive Learning (EDAL) strategy. AMRI-Net enhances diagnostic accuracy through multi-resolution feature extraction, attention-guided fusion mechanisms, and task-specific decoders, allowing the model to identify both fine-grained and global patterns across imaging modalities such as X-ray, CT, and MRI. EDAL improves cross-domain generalization through domain alignment techniques and integrates uncertainty-aware learning to prioritize high-confidence predictions. It employs attention-based interpretability tools to highlight critical image regions, improving transparency and clinical trust in AI-driven diagnoses.
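The attention-guided fusion step described above can be illustrated with a minimal sketch: feature maps extracted at several resolutions are aligned to a common spatial size and combined with softmax attention weights. This is an assumption-laden toy in NumPy, not the authors' implementation; the function names (`attention_fuse`, `upsample_nearest`) and the fixed per-scale logits are hypothetical, and a real AMRI-Net would learn both the features and the attention scores.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def upsample_nearest(feat, size):
    """Nearest-neighbour upsampling of an (h, w, C) feature map to (size, size, C)."""
    h, w, _ = feat.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return feat[rows][:, cols]

def attention_fuse(features, scores):
    """Fuse multi-resolution feature maps with softmax attention weights.

    features: list of (h_i, w_i, C) maps at different resolutions.
    scores:   per-scale attention logits, shape (num_scales,);
              in a trained network these would be predicted, not fixed.
    """
    size = max(f.shape[0] for f in features)
    aligned = np.stack([upsample_nearest(f, size) for f in features])  # (S, size, size, C)
    weights = softmax(scores)                                          # (S,) sums to 1
    return np.tensordot(weights, aligned, axes=1)                      # (size, size, C)

# Toy example: three scales of a 4-channel feature map.
rng = np.random.default_rng(0)
feats = [rng.standard_normal((s, s, 4)) for s in (8, 16, 32)]
fused = attention_fuse(feats, scores=np.array([0.2, 0.5, 0.3]))
print(fused.shape)  # (32, 32, 4)
```

The softmax weighting means the fused map stays in the convex hull of the aligned per-scale maps, so no single resolution can dominate unless its attention logit is much larger than the others.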
Experimental results on multi-modal medical imaging datasets underscore the framework's superior performance, with classification accuracy reaching 94.95% and an F1-score of 94.85%.
This research bridges the gap between pathology and radiology, offering a comprehensive AI-driven solution that aligns with the evolving demands of modern healthcare by ensuring precision, reliability, and interpretability in medical imaging.