Department of Mathematics and Computer Science, University of Calabria, Rende, Italy.
Department of Information Engineering, Università Politecnica delle Marche, Ancona, Italy; Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy.
Comput Methods Programs Biomed. 2021 Mar;200:105834. doi: 10.1016/j.cmpb.2020.105834. Epub 2020 Nov 14.
Background and Objectives: Over the last decade, Deep Learning (DL) has revolutionized data analysis in many areas, including medical imaging. However, the advancement of DL in the surgery field faces a bottleneck: a shortage of large-scale data, which in turn may be attributed to the lack of a structured and standardized methodology for storing and analyzing surgical images in clinical centres. Furthermore, accurate manual annotations are expensive and time-consuming. The synthesis of artificial images can be of great help in this context; in recent years, Generative Adversarial Networks (GANs) have achieved promising results in producing photo-realistic images.
Methods: In this study, a method for Minimally Invasive Surgery (MIS) image synthesis is proposed. To this aim, the generative adversarial network pix2pix is trained to generate paired annotated MIS images by transforming rough segmentations of surgical instruments and tissues into realistic images. An additional regularization term was added to the original optimization problem in order to enhance the realism of surgical tools with respect to the background.
Results: Quantitative and qualitative (i.e., human-based) evaluations of the generated images were carried out in order to assess the effectiveness of the method.
Conclusions: Experimental results show that the proposed method is able to translate MIS segmentations into realistic MIS images, which can in turn be used to augment existing data sets and help overcome the lack of useful images; this allows physicians and algorithms to benefit from new annotated instances for their training.
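The abstract does not give the exact form of the additional regularization term, so the following is only a minimal NumPy sketch of one plausible reading: a pix2pix-style generator objective (adversarial term plus global L1 reconstruction, as in the original pix2pix) extended with a hypothetical third term that re-weights the L1 error inside the surgical-tool segmentation mask, so that errors on tool pixels cost more than the same errors on the background. The function name, weights `lambda_l1` and `lambda_tool`, and the masked-L1 form are all illustrative assumptions, not the paper's actual loss.

```python
import numpy as np

def pix2pix_generator_loss(fake, real, disc_fake_logits, tool_mask,
                           lambda_l1=100.0, lambda_tool=10.0):
    """Sketch of a pix2pix-style generator loss with a hypothetical
    tool-focused regularization term (an assumption for illustration;
    the paper's exact regularizer is not specified in the abstract).

    fake, real        : generated / ground-truth images (same shape)
    disc_fake_logits  : discriminator logits for the generated image
    tool_mask         : binary mask, 1 on surgical-tool pixels
    """
    eps = 1e-12

    # Adversarial term: the generator wants D(fake) -> 1
    # (non-saturating binary cross-entropy on the discriminator logits).
    d = 1.0 / (1.0 + np.exp(-disc_fake_logits))  # sigmoid
    adv = -np.mean(np.log(d + eps))

    # Global L1 reconstruction term, as in the original pix2pix objective.
    l1 = np.mean(np.abs(fake - real))

    # Hypothetical extra term: L1 error restricted to the tool region,
    # so mistakes on the instruments are penalized more heavily than
    # identical mistakes on the background.
    m = tool_mask.astype(float)
    tool_l1 = np.sum(np.abs(fake - real) * m) / (np.sum(m) + eps)

    return adv + lambda_l1 * l1 + lambda_tool * tool_l1
```

With this formulation, an error placed on a tool pixel yields a strictly larger loss than the same error placed on a background pixel, which is one simple way to bias the generator toward tool realism.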