Fotis Anastasia, Lalwani Neeraj, Gupta Pankaj, Yee Judy
Radiology, Albert Einstein College of Medicine, Bronx, USA.
Montefiore Medical Center, Bronx, USA.
Abdom Radiol (NY). 2025 Jul 28. doi: 10.1007/s00261-025-05144-y.
AI is rapidly transforming abdominal radiology. This scoping review mapped current applications across segmentation, detection, classification, prediction, and workflow optimization based on 432 studies published between 2019 and 2024. Most studies focused on CT imaging, with fewer involving MRI, ultrasound, or X-ray. Segmentation models (e.g., U-Net) performed well in liver and pancreatic imaging (Dice coefficients of 0.65-0.90). Classification models (e.g., ResNet, DenseNet) were commonly used for diagnostic labeling, with reported sensitivities ranging from 52% to 100% and specificities from 40.7% to 99%. A small number of studies employed true object detection models (e.g., YOLOv3, YOLOv7, Mask R-CNN) capable of spatial lesion localization, marking an emerging trend toward localization-based AI. Predictive models demonstrated AUCs between 0.62 and 0.99 but often lacked interpretability and external validation. Workflow optimization studies reported improved efficiency (e.g., reduced report turnaround and scan repetition), though standardized benchmarks were often missing. Major gaps identified include limited real-world validation, underuse of non-CT modalities, and unclear regulatory pathways. Successful clinical integration will require robust validation, practical implementation, and interdisciplinary collaboration.
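For readers unfamiliar with the metrics cited above, the Dice coefficient, sensitivity, and specificity can be computed directly from binary masks or labels. The snippet below is a minimal illustrative sketch using NumPy; the function names and the toy 4x4 masks are hypothetical and are not drawn from any of the reviewed studies.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2*|A & B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def sensitivity_specificity(pred: np.ndarray, truth: np.ndarray):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP) for binary labels."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    sens = tp / (tp + fn) if (tp + fn) > 0 else float("nan")
    spec = tn / (tn + fp) if (tn + fp) > 0 else float("nan")
    return sens, spec

# Hypothetical example: predicted vs. reference lesion masks on a 4x4 grid.
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print("Dice:", dice_coefficient(pred, truth))
print("Sensitivity, specificity:", sensitivity_specificity(pred, truth))
```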