Department of Electronic Engineering, Fudan University, Shanghai, China.
Department of Electronic Engineering, Fudan University, Shanghai, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai, China.
Comput Biol Med. 2019 Aug;111:103356. doi: 10.1016/j.compbiomed.2019.103356. Epub 2019 Jul 12.
Accurate segmentation of the left ventricle (LV) from cine magnetic resonance imaging (MRI) is an important step in the reliable assessment of cardiac function in cardiovascular disease patients. Several deep learning convolutional neural network (CNN) models have achieved state-of-the-art performance for LV segmentation from cine MRI. However, most published deep learning methods use individual cine frames as input and process each frame separately. This approach entirely ignores an important visual clue: the dynamic cardiac motion along the temporal axis, which radiologists observe closely when viewing cine MRI. To imitate the approach of experts, we propose a novel U-net-based method (OF-net) that integrates temporal information from cine MRI into LV segmentation. Our proposed network adds the temporal dimension by incorporating an optical flow (OF) field to capture the cardiac motion. In addition, we introduce two additional modules, an LV localization module and an attention module, which improve LV detection and segmentation accuracy, respectively. We evaluated OF-net on the public Cardiac Atlas database with multicenter cine MRI data. The results showed that OF-net achieves an average perpendicular distance (APD) of 0.90±0.08 pixels and a Dice index of 0.95±0.03 for LV segmentation in the middle slices, outperforming the classical U-net model (APD 0.92±0.04 pixels, Dice 0.94±0.16, p < 0.05). Specifically, the proposed method enhances the temporal continuity of segmentation at the apical and basal slices, which are typically more difficult to segment than middle slices. Our work exemplifies the ability of CNNs to "learn" from expert experience when applied to specific analysis tasks.
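The Dice index used to report segmentation overlap above is a standard metric: twice the intersection of the predicted and ground-truth masks divided by the sum of their sizes. The sketch below is a minimal illustration of that formula on toy binary masks, not the authors' evaluation code; the mask values shown are hypothetical.

```python
def dice_index(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (perfect agreement).
    """
    a = {i for i, v in enumerate(mask_a) if v}
    b = {i for i, v in enumerate(mask_b) if v}
    if not a and not b:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * len(a & b) / (len(a) + len(b))

# Hypothetical flattened 1-D binary masks for illustration
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 0, 1, 1]
print(round(dice_index(pred, truth), 3))  # → 0.75
```

In practice the masks are 2-D (or 3-D) arrays flattened before comparison; the abstract's Dice of 0.95 indicates near-complete overlap between the predicted and expert-drawn LV contours.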