Zhao Cheng, Droste Richard, Drukker Lior, Papageorghiou Aris T, Noble J Alison
Institute of Biomedical Engineering, University of Oxford.
Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, United Kingdom.
Med Image Comput Comput Assist Interv. 2021 Sep 21;12908:670-679. doi: 10.1007/978-3-030-87237-3_64.
Automated ultrasound (US)-probe movement guidance is desirable to assist inexperienced human operators during obstetric US scanning. In this paper, we present a new visual-assisted probe movement technique using automated landmark retrieval for assistive obstetric US scanning. First, a set of landmarks is constructed uniformly around a virtual 3D fetal model. Then, during obstetric scanning, a deep neural network (DNN) model locates the nearest landmark through a descriptor search between the current observation and the landmark set. The global position cues are visualised in real time on a monitor to assist the human operator in probe movement. A Transformer-VLAD network is proposed to learn a global descriptor that represents each US image. This method avoids deep parameter regression, which enhances the generalization ability of the network. To avoid prohibitively expensive human annotation, anchor-positive-negative US image pairs are constructed automatically through a KD-tree search of 3D probe positions. This yields an end-to-end network trained in a self-supervised way through contrastive learning.
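The abstract outlines two algorithmic steps that a short sketch can make concrete: mining anchor-positive-negative pairs from 3D probe positions with a KD-tree, and retrieving the nearest landmark by descriptor matching at scan time. The code below is a minimal illustration of those two ideas, not the authors' implementation; the distance thresholds, the generic descriptor tensors, and all function names are assumptions introduced for illustration.

    # Minimal sketch (assumptions, not the paper's code): KD-tree triplet mining
    # from 3D probe positions and nearest-landmark retrieval by descriptor search.
    import numpy as np
    import torch.nn.functional as F
    from scipy.spatial import cKDTree

    def mine_triplets(probe_positions, pos_radius=10.0, neg_radius=50.0, rng=None):
        """For each anchor frame, pick a positive whose probe position lies within
        pos_radius and a negative lying farther than neg_radius (radii are assumed)."""
        rng = rng or np.random.default_rng(0)
        tree = cKDTree(probe_positions)
        triplets = []
        for a, p_a in enumerate(probe_positions):
            near = [i for i in tree.query_ball_point(p_a, r=pos_radius) if i != a]
            far = np.setdiff1d(np.arange(len(probe_positions)),
                               tree.query_ball_point(p_a, r=neg_radius))
            if near and far.size:
                triplets.append((a, rng.choice(near), rng.choice(far)))
        return triplets

    def triplet_loss(descriptors, triplets, margin=0.3):
        """Contrastive (triplet) objective on L2-normalised global descriptors."""
        a, p, n = zip(*triplets)
        d = F.normalize(descriptors, dim=1)
        return F.triplet_margin_loss(d[list(a)], d[list(p)], d[list(n)], margin=margin)

    def retrieve_nearest_landmark(query_descriptor, landmark_descriptors):
        """At scan time, return the landmark whose descriptor is closest to the
        current frame's descriptor."""
        dists = np.linalg.norm(landmark_descriptors - query_descriptor, axis=1)
        return int(np.argmin(dists))

In this sketch the supervision comes entirely from the recorded 3D probe positions, which mirrors the self-supervised pair construction described in the abstract; the global descriptor itself would be produced by the proposed Transformer-VLAD encoder, which is not shown here.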