Aboian Mariam, Bousabarah Khaled, Kazarian Eve, Zeevi Tal, Holler Wolfgang, Merkaj Sara, Cassinelli Petersen Gabriel, Bahar Ryan, Subramanian Harry, Sunku Pranay, Schrickel Elizabeth, Bhawnani Jitendra, Zawalich Mathew, Mahajan Amit, Malhotra Ajay, Payabvash Sam, Tocino Irena, Lin MingDe, Westerhoff Malte
Department of Radiology and Biomedical Imaging, Brain Tumor Research Group, Yale School of Medicine, Yale University, New Haven, CT, United States.
Visage Imaging, GmbH, Berlin, Germany.
Front Neurosci. 2022 Oct 13;16:860208. doi: 10.3389/fnins.2022.860208. eCollection 2022.
Personalized interpretation of medical images is critical for optimal patient care, but the tools currently available to physicians for real-time quantitative analysis of patients' medical images are significantly limited. In this work, we describe a novel platform within the PACS for volumetric analysis of images, enabling the development, in parallel with the radiologist's read, of the large expert-annotated datasets that are critically needed for clinically meaningful AI algorithms. Specifically, we implemented a deep learning-based algorithm for automated brain tumor segmentation and radiomics extraction, and embedded it into the PACS to accelerate a supervised, end-to-end workflow for image annotation and radiomic feature extraction.
An algorithm was trained to segment whole primary brain tumors on FLAIR images from the multi-institutional BraTS 2021 glioma dataset. The algorithm was validated on an internal dataset from Yale New Haven Health (YNHH) and compared (by Dice similarity coefficient [DSC]) with radiologist manual segmentation. A UNETR deep-learning model was embedded into the Visage 7 (Visage Imaging, Inc., San Diego, CA, United States) diagnostic workstation. The automatically segmented brain tumor remained amenable to manual modification. PyRadiomics (Harvard Medical School, Boston, MA) was natively embedded into Visage 7 for feature extraction from the brain tumor segmentations.
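The paper does not include implementation code; the following is a minimal sketch of how a UNETR segmenter for single-channel FLAIR, whole-tumor output could be set up and run with MONAI. The patch size, feature size, and checkpoint filename are assumptions for illustration, not details taken from the paper.

```python
# Sketch only: a MONAI UNETR configured for one FLAIR input channel and a
# binary whole-tumor output, with sliding-window inference over the volume.
import torch
from monai.networks.nets import UNETR
from monai.inferers import sliding_window_inference

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = UNETR(
    in_channels=1,               # FLAIR only, per the abstract
    out_channels=2,              # background vs. whole tumor
    img_size=(128, 128, 128),    # assumed training patch size
    feature_size=16,
).to(device)
# Hypothetical checkpoint name; the trained weights are not public here.
model.load_state_dict(torch.load("unetr_flair_whole_tumor.pt", map_location=device))
model.eval()

def segment_whole_tumor(flair_volume: torch.Tensor) -> torch.Tensor:
    """flair_volume: (1, 1, H, W, D) normalized FLAIR; returns a binary mask."""
    with torch.no_grad():
        logits = sliding_window_inference(
            flair_volume.to(device),
            roi_size=(128, 128, 128),
            sw_batch_size=1,
            predictor=model,
        )
    return logits.argmax(dim=1, keepdim=True).cpu()  # 1 = whole tumor
```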
UNETR brain tumor segmentation took 4 s on average, and the median DSC was 86%, which is similar to published results but lower than those of the RSNA-ASNR-MICCAI BraTS 2021 challenge. Extraction of 106 radiomic features within the PACS took 5.8 ± 0.01 s on average. The extracted radiomic features did not vary with the time of extraction or with whether they were extracted within or outside the PACS. The workflow allows segmentation and feature extraction to run before the radiologist opens the study; opening the study in the PACS then lets the radiologist verify the segmentation and thereby annotate the study.
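As an illustration of the two measurements reported above, a minimal sketch follows of the DSC comparison against a manual mask and a run-to-run consistency check of the PyRadiomics features. The NIfTI filenames are hypothetical, and default PyRadiomics settings are used here, which yield on the order of 100 "original" features rather than the exact 106-feature set used in the paper.

```python
# Sketch only: Dice overlap between automated and manual masks, plus a check
# that repeated PyRadiomics runs return identical feature values.
import numpy as np
import SimpleITK as sitk
from radiomics import featureextractor

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical file names for the AI and radiologist segmentations.
auto_mask = sitk.GetArrayFromImage(sitk.ReadImage("auto_seg.nii.gz"))
manual_mask = sitk.GetArrayFromImage(sitk.ReadImage("manual_seg.nii.gz"))
print(f"DSC: {dice(auto_mask, manual_mask):.2%}")

# Repeat feature extraction and confirm the numeric features do not vary.
extractor = featureextractor.RadiomicsFeatureExtractor()
run1 = extractor.execute("flair.nii.gz", "auto_seg.nii.gz")
run2 = extractor.execute("flair.nii.gz", "auto_seg.nii.gz")
numeric_keys = [k for k in run1 if not k.startswith("diagnostics_")]
assert all(np.allclose(run1[k], run2[k]) for k in numeric_keys)
print(f"{len(numeric_keys)} features extracted, identical across runs")
```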
Integration of image processing algorithms for tumor auto-segmentation and feature extraction into the PACS allows curation of large datasets of annotated medical images and can accelerate the translation of research into personalized medicine applications in the clinic. The ability to revise the AI segmentations with familiar clinical tools, together with native embedding of the segmentation and radiomic feature extraction tools on the diagnostic workstation, accelerates the generation of ground-truth data.