Arafati Arghavan, Morisawa Daisuke, Avendi Michael R, Amini M Reza, Assadi Ramin A, Jafarkhani Hamid, Kheradvar Arash
The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, 2410 Engineering Hall, Irvine, CA 92697-2730, USA.
Center for Pervasive Communications and Computing, University of California, 4217 Engineering Hall, Irvine, CA 92697-2700, USA.
J R Soc Interface. 2020 Aug;17(169):20200267. doi: 10.1098/rsif.2020.0267. Epub 2020 Aug 19.
A major issue in translating artificial intelligence platforms for automatic segmentation of echocardiograms to the clinic is their generalizability. The present study introduces and validates a novel, generalizable and efficient fully automatic multi-label segmentation method for four-chamber-view echocardiograms based on deep fully convolutional networks (FCNs) and adversarial training. For the first time, we used generative adversarial networks for pixel classification training, a machine learning method not previously applied to cardiac imaging, to overcome the generalization problem. The method's performance was validated against manual segmentations as the ground truth. Furthermore, to verify our method's generalizability against existing techniques, we compared its performance with that of a state-of-the-art method both on our dataset and on an independent dataset of 450 patients from the CAMUS (cardiac acquisitions for multi-structure ultrasound segmentation) challenge. On our test dataset, automatic segmentation of all four chambers achieved Dice metrics of 92.1%, 86.3%, 89.6% and 91.4% for the left ventricle (LV), right ventricle (RV), left atrium (LA) and right atrium (RA), respectively. Correlations between automatically and manually derived LV volumes were 0.94 and 0.93 for end-diastolic and end-systolic volume, respectively. Excellent agreement with the chambers' reference contours and significant improvement over previous FCN-based methods suggest that generative adversarial networks for pixel classification training can effectively yield generalizable, fully automatic FCN-based networks for four-chamber segmentation of echocardiograms, even with a limited amount of training data.
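The Dice metric reported above quantifies the spatial overlap between an automatic mask and its manual reference, defined as 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch of how such per-chamber scores can be computed from a multi-label segmentation map follows; the integer label encoding (1-4 for LV, RV, LA, RA) is an illustrative assumption, not the paper's actual convention:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * float(intersection) / float(total)

def per_chamber_dice(pred_labels: np.ndarray,
                     target_labels: np.ndarray,
                     labels=(1, 2, 3, 4)) -> dict:
    """Per-label Dice for a multi-label segmentation map.
    Labels 1-4 standing for LV, RV, LA, RA is a hypothetical encoding."""
    return {lab: dice_coefficient(pred_labels == lab, target_labels == lab)
            for lab in labels}
```

For example, masks that agree on one of three labelled pixels yield a Dice of 2/3; identical masks yield 1.0.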