Inserm, UMR 1101, Brest F-29200, France.
Inserm, UMR 1101, Brest F-29200, France; Univ Bretagne Occidentale, Brest F-29200, France.
Med Image Anal. 2018 Jul;47:203-218. doi: 10.1016/j.media.2018.05.001. Epub 2018 May 9.
This paper investigates the automatic monitoring of tool usage during a surgery, with potential applications in report generation, surgical training and real-time decision support. Two surgeries are considered: cataract surgery, the most common surgical procedure, and cholecystectomy, one of the most common digestive surgeries. Tool usage is monitored in videos recorded either through a microscope (cataract surgery) or an endoscope (cholecystectomy). Following state-of-the-art video analysis solutions, each frame of the video is analyzed by convolutional neural networks (CNNs) whose outputs are fed to recurrent neural networks (RNNs) in order to take temporal relationships between events into account. The novelty lies in the way those CNNs and RNNs are trained. Computational complexity prevents the end-to-end training of "CNN+RNN" systems. Therefore, CNNs are usually trained first, independently of the RNNs. This approach is clearly suboptimal for surgical tool analysis: many tools are very similar to one another, but they can generally be differentiated based on past events. CNNs should therefore be trained to extract the visual features that are most useful in combination with the temporal context. A novel boosting strategy is proposed to achieve this goal: the CNN and RNN parts of the system are simultaneously enriched by progressively adding weak classifiers (either CNNs or RNNs) trained to improve the overall classification accuracy. Experiments were performed on a dataset of 50 cataract surgery videos, where the usage of 21 surgical tools was manually annotated, and a dataset of 80 cholecystectomy videos, where the usage of 7 tools was manually annotated. Very good classification performance is achieved on both datasets: tool usage could be labeled with an average area under the ROC curve of A=0.9961 and A=0.9939, respectively, in offline mode (using past, present and future information), and A=0.9957 and A=0.9936, respectively, in online mode (using past and present information only).
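For illustration, the sketch below shows a generic "CNN+RNN" tool-usage classifier of the kind described in the abstract: a per-frame CNN feature extractor feeding an RNN that adds temporal context, with one binary (multi-label) output per tool. It is only a minimal sketch of the overall architecture, not the paper's boosted training procedure; the ResNet-18 backbone, GRU, layer sizes and clip length are all assumptions made for the example.

```python
# Minimal sketch (assumptions: ResNet-18 backbone, GRU, 21 tools as in cataract surgery).
# This does NOT implement the paper's boosting strategy; it only illustrates the
# per-frame CNN -> RNN -> multi-label tool-usage pipeline.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CnnRnnToolDetector(nn.Module):
    def __init__(self, num_tools=21, hidden_size=256):
        super().__init__()
        backbone = resnet18(weights=None)           # frame-level CNN (assumed backbone)
        backbone.fc = nn.Identity()                 # keep the 512-d pooled features
        self.cnn = backbone
        self.rnn = nn.GRU(512, hidden_size, batch_first=True)  # temporal context over frames
        self.head = nn.Linear(hidden_size, num_tools)           # one logit per tool

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) video clip
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1))      # (b*t, 512) per-frame CNN features
        feats = feats.view(b, t, -1)
        hidden, _ = self.rnn(feats)                 # unidirectional, i.e. "online" use of past/present
        return self.head(hidden)                    # (b, t, num_tools) tool-usage logits

# Multi-label objective: each tool's presence in each frame is an independent binary decision.
model = CnnRnnToolDetector()
clip = torch.randn(2, 8, 3, 224, 224)               # 2 clips of 8 frames (illustrative sizes)
logits = model(clip)
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros_like(logits))
```

An offline variant (using past, present and future information, as in the reported offline results) would typically swap the unidirectional GRU for a bidirectional one and double the input size of the output layer accordingly.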