Zhang Dongqing, Noble Jack H, Dawant Benoit M
Dept. of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA.
Proc SPIE Int Soc Opt Eng. 2018 Feb;10574. doi: 10.1117/12.2293383. Epub 2018 Mar 2.
Cochlear implants (CIs) use electrode arrays that are surgically inserted into the cochlea to stimulate nerve endings, replacing the natural electro-mechanical transduction mechanism and restoring hearing for patients with profound hearing loss. Post-operatively, the CI needs to be programmed. Traditionally, this is done by an audiologist who is blind to the positions of the electrodes relative to the cochlea and relies on the patient's subjective response to stimuli. This is a trial-and-error process that can be frustratingly long (dozens of programming sessions are not unusual). To assist audiologists, we have proposed image-guided cochlear implant programming (IGCIP). In IGCIP, we use image processing algorithms to segment the intra-cochlear anatomy in pre-operative CT images and to localize the electrode arrays in post-operative CTs. We have shown that programming strategies informed by image-derived information significantly improve hearing outcomes for both adult and pediatric populations. We are now aiming to deploy these techniques clinically, which requires full automation. One challenge we face is the lack of standard image acquisition protocols. The content of the image volumes we need to process thus varies greatly, and visual inspection and labelling are currently required to initialize processing pipelines. In this work we propose a deep learning-based approach to automatically detect whether a head CT volume contains two ears, one ear, or no ear. Our approach has been tested on a data set containing over 2,000 CT volumes from 153 patients, and we achieve an overall classification accuracy of 95.97%.
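The abstract does not specify the network architecture used for the two-ears/one-ear/no-ear detection task. The sketch below is only a toy illustration of the general pipeline such a classifier follows (convolutional feature extraction, pooling, and a softmax over three classes), written in plain NumPy with random weights; the function names, kernel sizes, and the 16×16 "slice" are all assumptions for illustration, not the authors' method.

```python
import numpy as np

# Toy three-class classifier pipeline: conv -> ReLU -> global average
# pooling -> linear layer -> softmax. Weights are random; this only
# illustrates the shape of the computation, not a trained detector.

CLASSES = ["no ear", "one ear", "two ears"]

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(img, kernels, weights, bias):
    """One feature per kernel: global average of the ReLU'd response."""
    feats = np.array([np.maximum(conv2d(img, k), 0).mean() for k in kernels])
    return softmax(weights @ feats + bias)

rng = np.random.default_rng(0)
img = rng.random((16, 16))               # stand-in for a CT slice
kernels = rng.standard_normal((4, 3, 3)) # four 3x3 convolution kernels
weights = rng.standard_normal((3, 4))    # linear layer: 4 features -> 3 classes
bias = np.zeros(3)

probs = classify(img, kernels, weights, bias)
print(CLASSES[int(np.argmax(probs))], probs.round(3))
```

A real detector would of course be trained on labeled volumes and operate on full 3D CT data, but the output contract is the same: a probability vector over the three ear-content classes.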