Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI, USA.
Advancing the Zenith of Healthcare (AZH) Wound and Vascular Center, Milwaukee, WI, USA.
Sci Rep. 2024 Mar 25;14(1):7043. doi: 10.1038/s41598-024-56626-w.
The global burden of acute and chronic wounds presents a compelling case for improved wound classification methods, a vital step in diagnosis and in selecting optimal treatments. To address this need, we introduce a multi-modal network based on a deep convolutional neural network that categorizes wounds into four classes: diabetic, pressure, surgical, and venous ulcers. The network uses both wound images and their corresponding body locations for more precise classification, incorporating a body-map system that enables accurate wound-location tagging and improves on traditional image-only classification techniques. Our architecture integrates backbone models such as VGG16, ResNet152, and EfficientNet with spatial and channel-wise Squeeze-and-Excitation modules, Axial Attention, and an Adaptive Gated Multi-Layer Perceptron, providing a robust foundation for classification. The multi-modal network was trained and evaluated on two distinct datasets comprising wound images and corresponding location information. Notably, it outperformed traditional methods, reaching accuracy ranges of 74.79-100% for Region of Interest (ROI) classification without location, 73.98-100% for ROI classification with location, and 78.10-100% for whole-image classification, a significant improvement over previously reported performance metrics in the literature. These results indicate the potential of the proposed multi-modal network as an effective decision-support tool for wound image classification, paving the way for its application in various clinical contexts.
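The core idea of the abstract, fusing CNN image features with a tagged body location before classification, can be sketched as a simple late-fusion step. This is a minimal illustrative sketch, not the authors' trained model: the feature dimension, location vocabulary size, and all weights are placeholder assumptions, and the CNN backbone (VGG16/ResNet152/EfficientNet) is stood in for by a random feature vector.

```python
# Minimal sketch of late-fusion multi-modal wound classification.
# Assumptions (not from the paper): FEAT_DIM, NUM_LOCATIONS, and the
# random placeholder weights; a real system would take image features
# from a pretrained CNN backbone and learn W, b by training.
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 4      # diabetic, pressure, surgical, venous (from the paper)
NUM_LOCATIONS = 10   # assumed size of the body-map location vocabulary
FEAT_DIM = 512       # assumed CNN feature dimension

def one_hot(index, size):
    """Encode a body-map location index as a one-hot vector."""
    v = np.zeros(size)
    v[index] = 1.0
    return v

def classify(image_features, location_index, W, b):
    """Concatenate image features with the location encoding and score classes."""
    fused = np.concatenate([image_features, one_hot(location_index, NUM_LOCATIONS)])
    logits = W @ fused + b
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Placeholder weights and a fake image-feature vector, for illustration only.
W = rng.normal(size=(NUM_CLASSES, FEAT_DIM + NUM_LOCATIONS)) * 0.01
b = np.zeros(NUM_CLASSES)
features = rng.normal(size=FEAT_DIM)

probs = classify(features, location_index=3, W=W, b=b)
```

The resulting `probs` is a distribution over the four wound classes; in the paper's full architecture, the fusion is followed by attention and gating modules rather than a single linear layer.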