

3D Universal Lesion Detection and Tagging in CT with Self-Training.

Authors

Frazier Jared, Mathai Tejas Sudharshan, Liu Jianfei, Paul Angshuman, Summers Ronald M

Affiliations

Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda MD, USA.

Indian Institute of Technology, Jodhpur, Rajasthan, India.

Publication Information

ArXiv. 2025 Apr 7:arXiv:2504.05201v1.

Abstract

Radiologists routinely perform the tedious task of lesion localization, classification, and size measurement in computed tomography (CT) studies. Universal lesion detection and tagging (ULDT) can help alleviate the cumbersome nature of lesion measurement while also enabling tumor burden assessment. Previous ULDT approaches utilize the publicly available DeepLesion dataset; however, it does not provide the full volumetric (3D) extent of lesions and also exhibits a severe class imbalance. In this work, we propose a self-training pipeline to detect 3D lesions and tag them according to the body part in which they occur. We used a limited 30% subset of DeepLesion to train a VFNet model for 2D lesion detection and tagging. Next, the 2D lesion context was expanded into 3D, and the mined 3D lesion proposals were integrated back into the baseline training data to retrain the model over multiple rounds. Through this self-training procedure, our VFNet model learned from its own predictions, detected lesions in 3D, and tagged them. Our results indicate that our VFNet model achieved an average sensitivity of 46.9% at [0.125:8] false positives (FP) using only the 30% data subset, compared with the 46.8% achieved by an existing approach that used the entire DeepLesion dataset. To our knowledge, we are the first to jointly detect lesions in 3D and tag them by body part.
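The self-training loop described above can be sketched as follows. This is a minimal illustration only: every function name and data structure here is a hypothetical placeholder (the paper's actual VFNet training, DeepLesion I/O, and 2D-to-3D expansion are not shown), and the score threshold is an assumed parameter.

```python
# Hypothetical sketch of the self-training rounds described in the abstract.
# Real training of a VFNet detector is replaced by stub functions.

def train_detector(train_set):
    """Placeholder for training a 2D lesion detector (e.g., VFNet)."""
    return {"n_train": len(train_set)}

def mine_3d_proposals(model, candidates, score_thresh=0.5):
    """Placeholder for running the detector, expanding 2D hits into 3D
    lesion proposals, and keeping only the confident ones."""
    return [p for p in candidates if p["score"] >= score_thresh]

def self_train(labeled_subset, candidates, rounds=3):
    # Start from the limited labeled subset (e.g., 30% of DeepLesion).
    train_set = list(labeled_subset)
    model = train_detector(train_set)
    for _ in range(rounds):
        # Mine confident 3D proposals from the model's own predictions.
        proposals = mine_3d_proposals(model, candidates)
        # Fold only previously unseen proposals back into the training data.
        new = [p for p in proposals if p not in train_set]
        train_set.extend(new)
        # Retrain on the enlarged set for the next round.
        model = train_detector(train_set)
    return model, train_set
```

The key design point is that each round enlarges the training set with the model's own confident 3D proposals before retraining, so the detector bootstraps 3D supervision from 2D annotations.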


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a976/12306825/800fd48e15c4/nihpp-2504.05201v1-f0001.jpg
