Wang Weiyao, Tamhane Aniruddha, Santos Christine, Rzasa John R, Clark James H, Canares Therese L, Unberath Mathias
Department of Computer Science, Johns Hopkins University School of Engineering, Baltimore, MD, United States.
Department of Pediatrics, Johns Hopkins University School of Medicine, Baltimore, MD, United States.
Front Digit Health. 2022 Feb 10;3:810427. doi: 10.3389/fdgth.2021.810427. eCollection 2021.
Ear-related concerns and symptoms represent the leading indication for seeking pediatric healthcare attention. Despite the high incidence of such encounters, the diagnostic process for commonly encountered diseases of the middle and external ear presents a significant challenge. Much of this challenge stems from the lack of cost-effective diagnostic testing, which necessitates that the presence or absence of ear pathology be determined clinically. Research has, however, demonstrated considerable variation among clinicians in their ability to accurately diagnose and consequently manage ear pathology. With recent advances in computer vision and machine learning, there is increasing interest in helping clinicians accurately diagnose middle and external ear pathology with computer-aided systems. It has been shown that AI can analyze a single clinical image captured during examination of the ear canal and eardrum and, from it, determine the likelihood that a pathognomonic pattern for a specific diagnosis is present. Capturing such an image can, however, be challenging, especially for inexperienced clinicians. To help mitigate this technical challenge, we developed and tested a method that uses video sequences. The videos were collected with a commercially available otoscope smartphone attachment in an urban, tertiary-care pediatric emergency department. We present a two-stage method that first identifies valid frames by detecting and extracting eardrum patches from the video sequence, and second performs the proposed shift contrastive anomaly detection (SCAD) to flag otoscopy video sequences as normal or abnormal. Our method achieves an AUROC of 88.0% at the patient level and also outperforms the average of a group of 25 clinicians in a comparative study, the largest of its kind published to date. We conclude that the presented method is a promising first step toward the automated analysis of otoscopy video.
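To make the two-stage structure concrete, the sketch below shows one plausible way such a pipeline could be wired together; it is an illustrative assumption, not the authors' implementation. The eardrum-patch detector, the anomaly scorer (a stand-in for SCAD), the confidence threshold, and the mean/max pooling choices are all hypothetical; only the overall pattern (filter valid frames, score them, aggregate to the patient level, evaluate with AUROC) follows the abstract.

```python
# Illustrative sketch of a two-stage otoscopy screening pipeline.
# All components below are hypothetical stand-ins, not the published SCAD method.
import numpy as np
from sklearn.metrics import roc_auc_score


def select_valid_frames(frames, detector, min_conf=0.5):
    """Stage 1 (hypothetical): keep frames where an eardrum-patch detector fires."""
    patches = []
    for frame in frames:
        patch, conf = detector(frame)  # assumed to return (cropped eardrum patch, confidence)
        if conf >= min_conf:
            patches.append(patch)
    return patches


def score_video(frames, detector, anomaly_scorer):
    """Stage 2 (hypothetical): score valid patches and pool them into one video-level score."""
    patches = select_valid_frames(frames, detector)
    if not patches:
        return np.nan  # no usable eardrum view in this video
    frame_scores = np.array([anomaly_scorer(p) for p in patches])
    return frame_scores.mean()  # simple mean pooling; the paper's aggregation may differ


def patient_level_auroc(videos_by_patient, labels_by_patient, detector, anomaly_scorer):
    """Aggregate video scores per patient and compute AUROC against normal/abnormal labels."""
    scores, labels = [], []
    for patient_id, videos in videos_by_patient.items():
        video_scores = [score_video(v, detector, anomaly_scorer) for v in videos]
        video_scores = [s for s in video_scores if not np.isnan(s)]
        if not video_scores:
            continue
        scores.append(max(video_scores))  # flag a patient by the most anomalous video
        labels.append(labels_by_patient[patient_id])
    return roc_auc_score(labels, scores)
```

Given callables for the detector and anomaly scorer, `patient_level_auroc` returns the patient-level AUROC comparable in spirit to the 88.0% figure reported in the abstract; the frame- and video-level pooling rules are design choices the paper may make differently.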