Stember Joseph N, Celik Haydar, Gutman David, Swinburne Nathaniel, Young Robert, Eskreis-Winkler Sarah, Holodny Andrei, Jambawalikar Sachin, Wood Bradford J, Chang Peter D, Krupinski Elizabeth, Bagci Ulas
Department of Radiology, Memorial Sloan-Kettering Cancer Center, 1275 York Ave, New York, NY 10065 (J.N.S., D.G., N.S., R.Y., S.E.W., A.H.); The National Institutes of Health Clinical Center, Bethesda, Md (H.C., B.J.W.); Department of Radiology, Columbia University Medical Center, New York, NY (S.J.); Department of Radiology, University of California-Irvine, Irvine, Calif (P.D.C.); Department of Radiology & Imaging Sciences, Emory University, Atlanta, Ga (E.K.); and Center for Research in Computer Vision, University of Central Florida, Orlando, Fla (U.B.).
Radiol Artif Intell. 2020 Nov 11;3(1):e200047. doi: 10.1148/ryai.2020200047. eCollection 2021 Jan.
PURPOSE: To generate and assess an algorithm combining eye tracking and speech recognition to extract brain lesion location labels automatically for deep learning (DL).

MATERIALS AND METHODS: In this retrospective study, 700 two-dimensional brain tumor MRI scans from the Brain Tumor Segmentation database were clinically interpreted. For each image, a single radiologist dictated a standard phrase describing the lesion into a microphone, simulating clinical interpretation. Eye-tracking data were recorded simultaneously. Using speech recognition, gaze points corresponding to each lesion were obtained. Lesion locations were used to train a keypoint detection convolutional neural network to find new lesions. The network was then used to localize lesions in an independent test set of 85 images. The statistical measure used to evaluate our method was percent accuracy.

RESULTS: Eye tracking with speech recognition was 92% accurate in labeling lesion locations in the training dataset, demonstrating that fully simulated interpretation can yield reliable tumor location labels. These labels were then used to train the DL network. The detection network trained on these labels predicted lesion location in a separate test set with 85% accuracy.

CONCLUSION: The DL network was able to locate brain tumors on the basis of training data that were labeled automatically from simulated clinical image interpretation. © RSNA, 2020.
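The core labeling step described above — using a speech-recognition timestamp to select the gaze points that correspond to a dictated lesion — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the gaze-sample format `(time, x, y)`, and the ±0.5 s alignment window are all assumptions made for the example.

```python
def lesion_location(gaze, speech_time, window=0.5):
    """Estimate a lesion's (x, y) image location by averaging the gaze
    samples recorded within +/- `window` seconds of the moment the
    speech recognizer detected the lesion-describing phrase."""
    pts = [(x, y) for t, x, y in gaze if abs(t - speech_time) <= window]
    if not pts:
        raise ValueError("no gaze samples near the speech event")
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)


# Example: gaze drifts across the image; only the two fixations near the
# dictation time (t = 1.1 s) contribute to the lesion label.
gaze = [(0.0, 10, 10), (1.0, 50, 52), (1.2, 54, 48), (3.0, 90, 90)]
print(lesion_location(gaze, speech_time=1.1))  # → (52.0, 50.0)
```

The averaged point would then serve as the keypoint label for training the detection network.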