Pei Yuchen, Wang Lisheng, Zhao Fenqiang, Zhong Tao, Liao Lufan, Shen Dinggang, Li Gang
Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China.
Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA.
Mach Learn Med Imaging. 2020 Oct;12436:384-393. doi: 10.1007/978-3-030-59861-7_39. Epub 2020 Sep 29.
Fetal Magnetic Resonance Imaging (MRI) is challenged by fetal movements and maternal breathing. Although fast MRI sequences allow artifact-free acquisition of individual 2D slices, motion commonly occurs between slice acquisitions. Motion correction for each slice is thus very important for reconstruction of 3D fetal brain MRI, but is highly operator-dependent and time-consuming. Approaches based on convolutional neural networks (CNNs) have achieved encouraging performance in predicting the 3D motion parameters of arbitrarily oriented 2D slices, but they do not capitalize on important brain structural information. To address this problem, we propose a new multi-task learning framework to jointly learn the transformation parameters and tissue segmentation map of each slice, providing brain anatomical information to guide the mapping from 2D slices to 3D volumetric space in a coarse-to-fine manner. In the coarse stage, the first network learns features shared by both the regression and segmentation tasks. In the refinement stage, to fully utilize the anatomical information, distance maps constructed from the coarse segmentation are introduced into the second network. Incorporating these signed distance maps to jointly guide the regression and segmentation improves the performance of both tasks. Experimental results indicate that the proposed method achieves superior performance in reducing the motion prediction error while simultaneously obtaining satisfactory tissue segmentation results, compared with state-of-the-art methods.
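The refinement stage described above feeds signed distance maps, derived from the coarse segmentation, into the second network. As a minimal sketch of how such a map can be computed from a binary tissue mask, here is one common construction using the Euclidean distance transform; the sign convention (negative inside the tissue, positive outside) and the `signed_distance_map` helper name are assumptions for illustration, not details from the paper:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Signed Euclidean distance map of a binary segmentation mask.

    Each pixel holds its Euclidean distance to the tissue boundary:
    negative inside the mask, positive outside (an assumed convention).
    """
    mask = mask.astype(bool)
    # distance_transform_edt gives, for each nonzero pixel, the distance
    # to the nearest zero pixel.
    dist_outside = distance_transform_edt(~mask)  # distances for background pixels
    dist_inside = distance_transform_edt(mask)    # distances for tissue pixels
    return dist_outside - dist_inside

# Toy example: a 3x3 square "tissue" region in a 7x7 slice.
mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True
sdm = signed_distance_map(mask)
# Inside pixels are negative, outside pixels positive; the magnitude
# grows with distance from the boundary.
```

In a multi-task setting, a map like this can be stacked with the input slice as an extra channel of the refinement network, giving it an explicit, smoothly varying encoding of how far each pixel lies from the coarse tissue boundary.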