
Learning to Localize Cross-Anatomy Landmarks in X-Ray Images with a Universal Model.

Authors

Zhu Heqin, Yao Qingsong, Xiao Li, Zhou S Kevin

Affiliations

Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China.

Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou 215123, China.

Publication Information

BME Front. 2022 Jun 8;2022:9765095. doi: 10.34133/2022/9765095. eCollection 2022.

Abstract

In this work, we develop a universal anatomical landmark detection model which learns once from multiple datasets corresponding to different anatomical regions. Compared with the conventional model trained on a single dataset, this universal model is not only more lightweight and easier to train but also improves the accuracy of anatomical landmark localization. The accurate and automatic localization of anatomical landmarks plays an essential role in medical image analysis. However, recent deep learning-based methods only utilize limited data from a single dataset. It is therefore promising and desirable to build a model learned from different regions that harnesses the power of big data. Our model consists of a local network and a global network, which capture local features and global features, respectively. The local network is a fully convolutional network built with depth-wise separable convolutions, and the global network uses dilated convolutions to enlarge the receptive field and model global dependencies. We evaluate our model on four 2D X-ray image datasets totaling 1710 images and 72 landmarks across four anatomical regions. Extensive experimental results show that our model improves detection accuracy compared to state-of-the-art methods. Our model makes the first attempt to train a single network on multiple datasets for landmark detection. Experimental results qualitatively and quantitatively show that our proposed model performs better than other models trained on multiple datasets, and even better than models trained separately on single datasets.
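The abstract names two building blocks: depth-wise separable convolutions in the local network and dilated convolutions in the global network. The following is a minimal PyTorch sketch of those two blocks only, not the authors' implementation; the class names, channel sizes, and dilation rate are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): a depth-wise separable
# convolution block (lightweight local features) and a dilated convolution
# block (enlarged receptive field for global dependencies).
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depth-wise 3x3 conv followed by a 1x1 point-wise conv."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # groups=in_ch makes the 3x3 conv operate per channel (depth-wise)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        # 1x1 conv mixes channels (point-wise)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


class DilatedConvBlock(nn.Module):
    """3x3 conv with dilation > 1 to widen the receptive field at the same resolution."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                              padding=dilation, dilation=dilation)

    def forward(self, x):
        return self.conv(x)


if __name__ == "__main__":
    x = torch.randn(1, 16, 128, 128)              # dummy feature map
    local = DepthwiseSeparableConv(16, 32)        # local-network style block
    global_ = DilatedConvBlock(32, 32, dilation=4)  # global-network style block
    features = global_(local(x))
    print(features.shape)                         # torch.Size([1, 32, 128, 128])
```

In a heatmap-regression landmark detector, feature maps like these would typically be mapped to one heatmap per landmark and the peak of each heatmap taken as the predicted location.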

