Department of Biomedical Engineering, University of Arkansas, Fayetteville, Arkansas.
Lasers Surg Med. 2021 Oct;53(8):1086-1095. doi: 10.1002/lsm.23375. Epub 2021 Jan 13.
BACKGROUND AND OBJECTIVES: Histological analysis is the gold-standard technique for studying impaired skin wound healing. Label-free multiphoton microscopy (MPM) can provide natural image contrast similar to histological sections, along with quantitative metabolic information from NADH and FAD autofluorescence. However, MPM analysis requires time-intensive manual segmentation of specific wound tissue regions, which limits the practicality of the technology for monitoring wounds. The goal of this study was to train a series of convolutional neural networks (CNNs) to segment MPM images of skin wounds, automating image processing and the quantification of wound geometry and metabolism.
STUDY DESIGN/MATERIALS AND METHODS: Two CNNs with a four-layer U-Net architecture were trained to segment unstained skin wound tissue sections and in vivo z-stacks of the wound edge. The wound section CNN used 380 distinct MPM images and the in vivo CNN used 5,848, with each image set randomly divided into training, validation, and test sets following a 70%/20%/10% split. The accuracy of each network was evaluated on its test set, and automated measurements of wound geometry and optical redox ratio were compared with hand-traced outputs for six unstained wound sections and 69 wound edge z-stacks from eight mice.
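The abstract does not include implementation details beyond the architecture description, but a four-level U-Net of the kind referenced above can be sketched in a few dozen lines of PyTorch. The sketch below is illustrative only: channel widths, the single-channel input, and the two-class output are assumptions, not details taken from the study.

```python
# Minimal illustrative four-level U-Net for per-pixel segmentation.
# Channel widths, input channels, and class count are assumptions.
import torch
import torch.nn as nn


def double_conv(in_ch, out_ch):
    """Two 3x3 conv + ReLU blocks, the basic U-Net building unit."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class UNet4(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        # Encoder: four levels of downsampling.
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.enc3 = double_conv(64, 128)
        self.enc4 = double_conv(128, 256)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(256, 512)
        # Decoder: transposed convs upsample, skip connections concatenate.
        self.up4 = nn.ConvTranspose2d(512, 256, 2, stride=2)
        self.dec4 = double_conv(512, 256)
        self.up3 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec3 = double_conv(256, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        e4 = self.enc4(self.pool(e3))
        b = self.bottleneck(self.pool(e4))
        d4 = self.dec4(torch.cat([self.up4(b), e4], dim=1))
        d3 = self.dec3(torch.cat([self.up3(d4), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)


if __name__ == "__main__":
    model = UNet4(in_ch=1, n_classes=2)
    logits = model(torch.randn(1, 1, 256, 256))  # dummy MPM image tile
    print(logits.shape)  # torch.Size([1, 2, 256, 256])
```

A 70%/20%/10% split like the one described can be obtained by shuffling image indices (e.g., with `numpy.random.permutation`) before partitioning, so that no image appears in more than one of the training, validation, or test sets.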
RESULTS: The MPM wound section CNN had an overall accuracy of 92.83%. Measurements of epidermal/dermal thickness, wound depth, wound width, and percent re-epithelialization were within 10% error when evaluated on six full wound sections from days 3, 5, and 10 post-wounding that were not included in the training set. The in vivo wound z-stack CNN had an overall accuracy of 89.66% and isolated the wound edge epithelium in z-stacks from eight mice across post-wound time points, quantifying the optical redox ratio to within 5% of values obtained from manual segmentations.
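The overall accuracy and redox ratio readouts reported above can be computed from a predicted mask with a few lines of NumPy. The sketch below is a hypothetical post-processing step, not code from the study; the redox ratio is taken here as FAD/(NADH + FAD), a common convention that should be checked against the paper's definition.

```python
# Illustrative post-processing: overall pixel accuracy of a predicted mask
# and the optical redox ratio within a segmented region.
# Assumption: redox ratio = FAD / (NADH + FAD).
import numpy as np


def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted label matches the manual label."""
    return float((np.asarray(pred) == np.asarray(truth)).mean())


def redox_ratio(nadh, fad, mask):
    """Mean FAD/(NADH+FAD) over pixels inside the segmented epithelium mask."""
    nadh = np.asarray(nadh, dtype=float)
    fad = np.asarray(fad, dtype=float)
    mask = np.asarray(mask, dtype=bool)
    denom = nadh + fad
    valid = mask & (denom > 0)           # avoid division by zero
    return float((fad[valid] / denom[valid]).mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.integers(0, 2, size=(128, 128))
    pred = truth.copy()
    pred[:10] = 1 - pred[:10]            # flip some rows to simulate errors
    print(f"accuracy: {pixel_accuracy(pred, truth):.4f}")

    nadh = rng.random((128, 128))        # dummy NADH intensity image
    fad = rng.random((128, 128))         # dummy FAD intensity image
    print(f"redox ratio: {redox_ratio(nadh, fad, truth.astype(bool)):.3f}")
```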
CONCLUSIONS: The CNNs trained and presented in this study can accurately segment MPM-imaged wound sections and in vivo z-stacks, enabling automated and rapid calculation of wound geometry and metabolism. Although MPM is a noninvasive imaging modality well suited to imaging living wound tissue, its use has been limited by time-intensive user segmentation. The use of CNNs for automated image segmentation demonstrates that it is possible for MPM to deliver near real-time quantitative readouts of tissue structure and function. Lasers Surg. Med. © 2021 Wiley Periodicals LLC.