The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi'an Jiaotong University, Xi'an, 710049, China.
Comput Biol Med. 2024 Nov;182:109171. doi: 10.1016/j.compbiomed.2024.109171. Epub 2024 Oct 2.
Accurate assessment of burn severity is crucial for the management of burn injuries. Currently, clinicians rely mainly on visual inspection to assess burns, a method characterized by notable inter-observer discrepancies. In this study, we introduce an innovative analysis platform that uses color burn wound images for automatic burn severity assessment. To this end, we propose a novel joint-task deep learning model capable of simultaneously segmenting both burn regions and body parts, the two crucial components in calculating the percentage of total body surface area (%TBSA). An asymmetric attention mechanism is introduced, allowing attention guidance from the body part segmentation task to the burn region segmentation task. A user-friendly mobile application is developed to facilitate fast assessment of burn severity in clinical settings. The proposed framework was evaluated on a dataset comprising 1340 color burn wound images captured on-site in clinical settings. The average Dice coefficients for burn depth segmentation and body part segmentation are 85.12% and 85.36%, respectively. The correlation coefficient (R) for %TBSA assessment is 0.9136. The source code for the joint-task framework and the application is released on GitHub (https://github.com/xjtu-mia/BurnAnalysis). The proposed platform holds the potential to be widely used in clinical settings to facilitate fast and precise burn assessment.
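The abstract describes computing %TBSA from the two segmentation outputs. As a rough illustration of how the two masks could be combined, the sketch below weights the burned fraction of each segmented body part by a standard rule-of-nines surface percentage. This is a minimal assumption-laden sketch, not the paper's actual method: the label set, the weighting scheme, and the function `estimate_tbsa` are all hypothetical.

```python
import numpy as np

# Hypothetical rule-of-nines weights keyed by body-part label.
# The actual label set and per-part percentages used in the paper may differ.
RULE_OF_NINES = {
    1: 9.0,   # head and neck
    2: 9.0,   # one arm
    3: 18.0,  # anterior trunk
}

def estimate_tbsa(burn_mask: np.ndarray, part_mask: np.ndarray,
                  weights: dict[int, float]) -> float:
    """Estimate %TBSA from a binary burn mask and a labeled body-part mask.

    For each visible body part, the burned pixel fraction within that part
    is scaled by the part's standard surface-area percentage and summed.
    """
    tbsa = 0.0
    for label, pct in weights.items():
        part = part_mask == label
        part_area = part.sum()
        if part_area == 0:
            continue  # body part not visible in this image
        burned = np.logical_and(burn_mask > 0, part).sum()
        tbsa += pct * burned / part_area
    return tbsa
```

For example, a mask in which half of a part weighted 9% is burned would contribute 4.5% to the estimate.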