Chang Che Wei, Lai Feipei, Christian Mesakh, Chen Yu Chun, Hsu Ching, Chen Yo Shen, Chang Dun Hao, Roan Tyng Luen, Yu Yen Che
Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan.
Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan.
JMIR Med Inform. 2021 Dec 2;9(12):e22798. doi: 10.2196/22798.
Accurate assessment of the percentage total body surface area (%TBSA) of burn wounds is crucial in the management of burn patients. The resuscitation fluid and nutritional needs of burn patients, their need for intensive care unit admission, and their probability of mortality are all directly related to %TBSA. The area of an irregularly shaped burn is difficult to estimate by visual inspection, and many studies have reported discrepancies among different doctors' %TBSA estimates.
We propose a method, based on deep learning, for burn wound detection, segmentation, and calculation of %TBSA on a pixel-to-pixel basis.
A 2-step procedure was used to convert burn wound diagnosis into %TBSA. In the first step, images of burn wounds were collected from medical records and labeled by burn surgeons, and the data set was then input into 2 deep learning architectures, U-Net and Mask R-CNN, each configured with 2 different backbones, to segment the burn wounds. In the second step, we collected and labeled images of hands to create another data set, which was also input into U-Net and Mask R-CNN to segment the hands. The %TBSA of burn wounds was then calculated by comparing the pixels of mask areas on images of the burn wound and hand of the same patient according to the rule of hand, which states that one's hand accounts for 0.8% of TBSA.
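The conversion in the second step reduces to a simple ratio. As a minimal sketch (not the authors' code), assuming the burn and hand images of the same patient are captured at comparable scale so that pixel counts are directly comparable, the rule of hand gives:

```python
# Hypothetical illustration of the rule-of-hand conversion described above.
# One hand is taken to represent 0.8% of total body surface area (TBSA).
HAND_TBSA_PERCENT = 0.8

def burn_tbsa_percent(burn_mask_pixels: int, hand_mask_pixels: int) -> float:
    """Convert segmented mask pixel counts to %TBSA.

    burn_mask_pixels: number of pixels in the predicted burn wound mask
    hand_mask_pixels: number of pixels in the predicted hand mask
    Assumes both images of the same patient share a similar scale.
    """
    return burn_mask_pixels / hand_mask_pixels * HAND_TBSA_PERCENT

# Example: a burn mask twice the area of the hand mask corresponds to 1.6 %TBSA.
print(burn_tbsa_percent(120_000, 60_000))  # → 1.6
```

Any scale difference between the two photographs would bias this ratio, which is why both masks come from images of the same patient.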
A total of 2591 images of burn wounds were collected and labeled to form the burn wound data set. The data set was randomly split into training, validation, and testing sets in a ratio of 8:1:1. Four hundred images of volar hands were collected and labeled to form the hand data set, which was split into 3 sets using the same method. For the burn wound images, Mask R-CNN with a ResNet101 backbone achieved the best segmentation result with a Dice coefficient (DC) of 0.9496, while U-Net with ResNet101 achieved a DC of 0.8545. For the hand images, U-Net and Mask R-CNN performed similarly, with DC values of 0.9920 and 0.9910, respectively. Lastly, we conducted a test diagnosis in a burn patient: Mask R-CNN with ResNet101 deviated less from the ground truth (by 0.115% TBSA on average) than the burn surgeons did.
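The Dice coefficient used to score segmentation quality is the standard overlap measure DC = 2|A∩B| / (|A| + |B|) between the predicted and ground-truth masks. A minimal illustration on flattened binary masks (an assumption for exposition, not the authors' evaluation code):

```python
def dice_coefficient(pred, truth):
    """Dice coefficient between two binary masks.

    pred, truth: equal-length flat sequences of 0/1 pixel labels.
    Returns 2*|intersection| / (|pred| + |truth|), in [0, 1].
    """
    if len(pred) != len(truth):
        raise ValueError("masks must have the same number of pixels")
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * intersection / total if total else 1.0

# Example: masks agreeing on 1 of 2 foreground pixels each -> DC = 0.5
print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))  # → 0.5
```

A DC of 1.0 indicates perfect overlap, so the reported hand-segmentation scores of ~0.99 correspond to near-perfect masks.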
This is one of the first studies to diagnose all depths of burn wounds and convert the segmentation results into %TBSA using different deep learning models. We aimed to assist medical staff in estimating burn size more accurately, thereby helping to provide precise care to burn victims.