School of Computing and Engineering, University of West London, London, United Kingdom.
National Heart and Lung Institute, Imperial College, London, United Kingdom.
Comput Biol Med. 2021 Jun;133:104373. doi: 10.1016/j.compbiomed.2021.104373. Epub 2021 Apr 6.
Accurate identification of end-diastolic and end-systolic frames in echocardiographic cine loops is important, yet challenging, for human experts. Manual frame selection is subject to uncertainty, affecting crucial clinical measurements, such as myocardial strain. Therefore, the ability to automatically detect frames of interest is highly desirable.
We have developed deep neural networks, trained and tested on multi-centre patient data, for the accurate identification of end-diastolic and end-systolic frames in apical four-chamber 2D multibeat cine loop recordings of arbitrary length. Seven experienced cardiologists independently labelled the frames of interest, providing reliable annotations and allowing inter-observer variability to be measured.
When compared with the ground truth, our model shows an average frame difference of -0.09 ± 1.10 and 0.11 ± 1.29 frames for end-diastolic and end-systolic frames, respectively. When applied to patient datasets from a different clinical site, to which the model was blind during its development, average frame differences of -1.34 ± 3.27 and -0.31 ± 3.37 frames were obtained for the two frames of interest. All detection errors fall within the range of inter-observer variability: [-5.51, -0.87] ± [2.29, 4.26] frames for ED events and [-3.46, -0.97] ± [3.67, 4.68] frames for ES events.
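The frame-difference metrics reported above can be illustrated with a short sketch. The signed difference between each predicted and reference frame index is summarised as mean ± standard deviation; the frame indices below are purely illustrative values, not data from the study.

```python
import numpy as np

# Hypothetical predicted and expert-consensus ED frame indices for a set of
# cine loops (illustrative values only).
predicted = np.array([12, 45, 78, 31, 60])
ground_truth = np.array([12, 46, 77, 31, 61])

# Signed frame difference: a negative value means the model fired early
# relative to the reference annotation.
diff = predicted - ground_truth
mean_diff = diff.mean()
std_diff = diff.std(ddof=1)  # sample standard deviation

print(f"frame difference: {mean_diff:+.2f} \u00b1 {std_diff:.2f} frames")
```

A model's error distribution can then be compared against the spread of differences between individual observers: if the model's mean ± SD lies inside the inter-observer range, its selections are statistically indistinguishable from expert disagreement.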
The proposed automated model can identify multiple end-systolic and end-diastolic frames in echocardiographic videos of arbitrary length with performance indistinguishable from that of human experts, but with significantly shorter processing time.