Keswani Rajesh N, Byrd Daniel, Garcia Vicente Florencia, Heller J Alex, Klug Matthew, Mazumder Nikhilesh R, Wood Jordan, Yang Anthony D, Etemadi Mozziyar
Digestive Health Center, Northwestern Medicine, Chicago, Illinois, United States.
Department of Anesthesiology, Northwestern Medicine, Chicago, Illinois, United States.
Endosc Int Open. 2021 Feb;9(2):E233-E238. doi: 10.1055/a-1326-1289. Epub 2021 Feb 3.
Background and study aims: Storage of full-length endoscopic procedure videos is becoming increasingly common. To enable large-scale machine learning (ML) focused on clinical outcomes, these videos must be merged with patient-level data in the electronic health record (EHR). Our aim was to present a method of accurately linking patient-level EHR data with cloud-stored colonoscopy videos.

Methods: This study was conducted at a single academic medical center. Most procedure videos are automatically uploaded to a cloud server but are identified only by procedure time and procedure room. We developed and tested an algorithm that matches recorded videos to the corresponding exams in the EHR based on procedure time and room, and subsequently extracts frames of interest.

Results: Among 28,611 total colonoscopies performed over the study period, 21,170 colonoscopy videos in 20,420 unique patients (54.2 % male, median age 58) were matched to EHR data. Appropriate matching was manually confirmed in all of 100 randomly sampled videos. In total, these videos represented 489,721 minutes of colonoscopy performed by 50 endoscopists (median 214 colonoscopies per endoscopist). The most common procedure indications were polyp screening (47.3 %), surveillance (28.9 %), and inflammatory bowel disease (9.4 %). From these videos, we extracted procedure highlights (identified by image capture; mean 8.5 per colonoscopy) and the surrounding frames.

Conclusions: We report the accurate merging of a large database of endoscopy videos, stored with only limited identifiers, with rich patient-level EHR data. This technique facilitates the development of ML algorithms based on relevant patient outcomes.
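The linkage idea described in the abstract — videos identified only by procedure room and start time, matched against EHR exams in the same room within a time window — can be sketched as follows. This is a minimal illustration under assumed data shapes; the function name, record layout, and 15-minute tolerance are hypothetical, not the authors' actual implementation.

```python
from datetime import datetime, timedelta

def match_videos_to_exams(videos, exams, tolerance_minutes=15):
    """Link each cloud-stored video to at most one EHR exam.

    videos: {video_id: (room, start_datetime)} -- the only metadata
            the cloud server keeps, per the abstract.
    exams:  {exam_id: (room, start_datetime)} -- drawn from the EHR.

    A video is matched only when exactly one exam in the same room
    starts within the tolerance window; videos with zero or multiple
    candidates are left unmatched rather than linked ambiguously.
    """
    tolerance = timedelta(minutes=tolerance_minutes)
    matches = {}
    for video_id, (room, video_start) in videos.items():
        candidates = [
            exam_id
            for exam_id, (exam_room, exam_start) in exams.items()
            if exam_room == room and abs(exam_start - video_start) <= tolerance
        ]
        if len(candidates) == 1:
            matches[video_id] = candidates[0]
    return matches

# Example: v1 matches e1 (same room, 3 minutes apart); v2 finds no
# exam in its room within the window and stays unmatched.
videos = {
    "v1": ("room_a", datetime(2020, 1, 1, 9, 0)),
    "v2": ("room_b", datetime(2020, 1, 1, 9, 5)),
}
exams = {
    "e1": ("room_a", datetime(2020, 1, 1, 9, 3)),
    "e2": ("room_b", datetime(2020, 1, 1, 10, 0)),
}
result = match_videos_to_exams(videos, exams)
```

Requiring a unique candidate per video mirrors why only 21,170 of 28,611 colonoscopies could be linked: conservatively discarding ambiguous matches trades coverage for the high linkage accuracy the authors verified by manual review.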