

Robust learning-based parsing and annotation of medical radiographs.

Affiliation

Microsoft Health Solutions Group, Chevy Chase, MD 20815, USA.

Publication Information

IEEE Trans Med Imaging. 2011 Feb;30(2):338-50. doi: 10.1109/TMI.2010.2077740. Epub 2010 Sep 27.

Abstract

In this paper, we propose a learning-based algorithm for automatic medical image annotation based on robust aggregation of learned local appearance cues, achieving high accuracy and robustness against severe diseases, imaging artifacts, occlusion, or missing data. The algorithm starts with a number of landmark detectors to collect local appearance cues throughout the image, which are subsequently verified by a group of learned sparse spatial configuration models. In most cases, a decision can already be made at this stage by simply aggregating the verified detections. For the remaining cases, an additional global appearance filtering step is employed to provide complementary information for the final decision. This approach is evaluated on a large-scale chest radiograph view identification task, demonstrating very high accuracy (>99.9%) for a posteroanterior/anteroposterior (PA-AP) and lateral view position identification task, compared with the recently reported large-scale result of only 98.2% (Luo, 2006). Our approach also achieved the best accuracies on a three-class and a multiclass radiograph annotation task when compared with other state-of-the-art algorithms. Our algorithm was used to enhance advanced image visualization workflows by enabling content-sensitive hanging protocols and auto-invocation of a computer-aided detection algorithm for identified PA-AP chest images. Finally, we show that the same methodology can be utilized for several image parsing applications, including anatomy/organ region-of-interest prediction and optimized image visualization.
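The staged decision process described in the abstract — collect local cues, verify them against spatial-configuration models, aggregate verified votes, and fall back to global appearance filtering only for ambiguous cases — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the detector, spatial-model, and global-filter callables here are hypothetical stand-ins, and the confidence-margin decision rule is an assumed simplification of the paper's aggregation step.

```python
# Hedged sketch of the three-stage annotation pipeline: (1) local landmark
# detection, (2) verification by sparse spatial-configuration models,
# (3) aggregation of verified votes, with a global appearance filter used
# only when the aggregated vote is ambiguous. All callables are hypothetical.

def annotate(image, landmark_detectors, spatial_models, global_filter,
             margin=0.5):
    """Return a predicted view label (e.g. 'PA-AP' or 'LAT') for `image`."""
    # Stage 1: each detector returns a (label, confidence) local cue.
    candidates = [det(image) for det in landmark_detectors]

    # Stage 2: keep only cues consistent with at least one learned
    # sparse spatial-configuration model.
    verified = [c for c in candidates
                if any(model(c) for model in spatial_models)]

    # Stage 3: aggregate verified detections into per-label vote scores.
    votes = {}
    for label, score in verified:
        votes[label] = votes.get(label, 0.0) + score

    if votes:
        ranked = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
        best_label, best_score = ranked[0]
        runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
        # Clear majority: decide here without the global filter.
        if best_score - runner_up >= margin:
            return best_label

    # Ambiguous or empty vote: fall back to global appearance filtering.
    return global_filter(image)
```

In this toy form, the global filter runs only when the verified local evidence is missing or nearly tied, mirroring the abstract's point that most cases are decided by simple aggregation of verified detections.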

