Deep action learning enables robust 3D segmentation of body organs in various CT and MRI images.

Affiliations

Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany.

Institute of Medical Engineering, University of Applied Sciences, Würzburg-Schweinfurt, Germany.

Publication information

Sci Rep. 2021 Feb 8;11(1):3311. doi: 10.1038/s41598-021-82370-6.

Abstract

In this study, we propose a novel point cloud based 3D registration and segmentation framework using reinforcement learning. An artificial agent, implemented with distinct actor and value networks, is trained to predict the optimal piece-wise linear transformation of a point cloud for the joint tasks of registration and segmentation. The actor network estimates a set of plausible actions and the value network aims to select the optimal action for the current observation. Point-wise features that comprise spatial positions (and surface normal vectors in the case of structured meshes), together with their corresponding image features, are used to encode the observation and represent the underlying 3D volume. The actor and value networks are applied iteratively to estimate a sequence of transformations that enable accurate delineation of object boundaries. The proposed approach was extensively evaluated on both segmentation and registration tasks using a variety of challenging clinical datasets. Our method has fewer trainable parameters and lower computational complexity than the 3D U-Net, and it is independent of the volume resolution. We show that the proposed method is applicable to mono- and multi-modal segmentation tasks, achieving significant improvements over the state of the art for the latter. The flexibility of the proposed framework is further demonstrated in a multi-modal registration application. As we learn to predict actions rather than a target, the proposed method is more robust than the 3D U-Net when dealing with previously unseen datasets acquired using different protocols or modalities. As a result, the proposed method provides a promising multi-purpose segmentation and registration framework, particularly in the context of image-guided interventions.
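To make the iterative actor/value scheme described in the abstract more concrete, the sketch below shows one plausible inference loop in PyTorch: point-wise features encode the current observation, an actor network proposes a small set of candidate piece-wise linear displacements, a value network scores each candidate, and the best one is applied to the point cloud before the next iteration. This is a minimal illustration only; the class names, layer sizes, the extract_features hook, and the per-point displacement parameterisation of the action are assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class ActorNet(nn.Module):
    """Proposes candidate piece-wise linear transformations
    (here: one 3D displacement per point and candidate) from the observation."""

    def __init__(self, feat_dim, num_candidates=8):
        super().__init__()
        self.num_candidates = num_candidates
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, num_candidates * 3),
        )

    def forward(self, point_feats):
        # point_feats: (N, feat_dim) -> candidate displacements: (K, N, 3)
        disp = self.mlp(point_feats).view(-1, self.num_candidates, 3)
        return disp.permute(1, 0, 2)


class ValueNet(nn.Module):
    """Scores each candidate action for the current observation."""

    def __init__(self, feat_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, point_feats, candidates):
        # candidates: (K, N, 3); broadcast the point features to every candidate.
        feats = point_feats.unsqueeze(0).expand(candidates.shape[0], -1, -1)
        x = torch.cat([feats, candidates], dim=-1)   # (K, N, feat_dim + 3)
        return self.mlp(x).mean(dim=(1, 2))          # one scalar value per candidate


def refine_point_cloud(points, extract_features, actor, value, num_steps=10):
    """Iteratively deform the point cloud toward the object boundary."""
    with torch.no_grad():
        for _ in range(num_steps):
            feats = extract_features(points)     # spatial + image features, (N, feat_dim)
            candidates = actor(feats)            # plausible actions, (K, N, 3)
            scores = value(feats, candidates)    # estimated value of each action, (K,)
            best = candidates[scores.argmax()]   # select the optimal action
            points = points + best               # apply the piece-wise linear update
    return points

Choosing among a set of proposed actions, rather than regressing the target boundary directly, is the property the abstract credits for the method's robustness to previously unseen protocols and modalities.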
