

A deep learning based multimodal fusion model for skin lesion diagnosis using smartphone collected clinical images and metadata.

Author Information

Ou Chubin, Zhou Sitong, Yang Ronghua, Jiang Weili, He Haoyang, Gan Wenjun, Chen Wentao, Qin Xinchi, Luo Wei, Pi Xiaobing, Li Jiehua

Affiliations

Clinical Research Institute, The First People's Hospital of Foshan, Foshan, China.

R/D Center, Visionwise Medical Technology, Foshan, China.

Publication Information

Front Surg. 2022 Oct 4;9:1029991. doi: 10.3389/fsurg.2022.1029991. eCollection 2022.

Abstract

INTRODUCTION

Skin cancer is one of the most common types of cancer. A screening tool accessible to the general public could help detect malignant lesions. We aimed to develop a deep learning model to classify skin lesions using clinical images and metadata collected from smartphones.

METHODS

A deep neural network was developed with two encoders for extracting information from the image data and the metadata, respectively. A multimodal fusion module with intra-modality self-attention and inter-modality cross-attention was proposed to effectively combine image features and metadata features. The model was trained and tested on a public dataset and compared with other state-of-the-art methods using five-fold cross-validation.
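The abstract does not include the authors' code, so the following is a minimal sketch, assuming a PyTorch implementation, of how a two-encoder network with intra-modality self-attention and inter-modality cross-attention could be wired up. The ResNet-50 backbone, embedding size, metadata dimensionality, and six-class output head are illustrative assumptions, not the published configuration.

```python
# Hypothetical sketch of a two-encoder, attention-based fusion network.
# Backbone, dimensions, and class count are assumptions for illustration only.
import torch
import torch.nn as nn
import torchvision.models as models


class AttentionFusionNet(nn.Module):
    def __init__(self, num_meta_features=20, embed_dim=256, num_classes=6):
        super().__init__()
        # Image encoder: a CNN backbone projected to a shared embedding size.
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.image_encoder = backbone
        # Metadata encoder: a small MLP over the tabular metadata vector.
        self.meta_encoder = nn.Sequential(
            nn.Linear(num_meta_features, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )
        # Intra-modality self-attention over the two modality tokens.
        self.self_attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        # Inter-modality cross-attention: image attends to metadata and vice versa.
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(embed_dim * 2, num_classes)

    def forward(self, image, meta):
        img_tok = self.image_encoder(image).unsqueeze(1)   # (B, 1, D)
        meta_tok = self.meta_encoder(meta).unsqueeze(1)    # (B, 1, D)
        # Intra-modality self-attention on the concatenated token sequence.
        tokens = torch.cat([img_tok, meta_tok], dim=1)     # (B, 2, D)
        tokens, _ = self.self_attn(tokens, tokens, tokens)
        img_tok, meta_tok = tokens[:, :1], tokens[:, 1:]
        # Inter-modality cross-attention in both directions, then fuse.
        img_fused, _ = self.cross_attn(img_tok, meta_tok, meta_tok)
        meta_fused, _ = self.cross_attn(meta_tok, img_tok, img_tok)
        fused = torch.cat([img_fused.squeeze(1), meta_fused.squeeze(1)], dim=-1)
        return self.classifier(fused)


# Example forward pass with dummy inputs.
if __name__ == "__main__":
    model = AttentionFusionNet()
    logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 20))
    print(logits.shape)  # torch.Size([2, 6])
```

In this sketch the same multi-head attention layer is reused for both cross-attention directions; using a separate layer per direction would be an equally plausible design choice.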

RESULTS

Including metadata was shown to significantly improve the model's performance. Our model outperformed other metadata fusion methods in terms of accuracy, balanced accuracy, and area under the receiver operating characteristic curve, with average values of 0.768±0.022, 0.775±0.022, and 0.947±0.007, respectively.
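As a point of reference for the evaluation protocol, the sketch below shows one common way, using scikit-learn, to average accuracy, balanced accuracy, and macro one-vs-rest AUROC across five folds. The fold predictions here are random placeholders, not the paper's data or results.

```python
# Hypothetical evaluation sketch: mean ± std of per-fold metrics over 5 folds.
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, roc_auc_score


def summarize_folds(fold_results):
    """fold_results: list of (y_true, y_prob) pairs, one per cross-validation fold."""
    acc, bacc, auc = [], [], []
    for y_true, y_prob in fold_results:
        y_pred = y_prob.argmax(axis=1)
        acc.append(accuracy_score(y_true, y_pred))
        bacc.append(balanced_accuracy_score(y_true, y_pred))
        auc.append(roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"))
    return {name: (np.mean(vals), np.std(vals))
            for name, vals in [("accuracy", acc), ("balanced_accuracy", bacc), ("auroc", auc)]}


# Example with random placeholder predictions for 5 folds and 6 classes.
rng = np.random.default_rng(0)
folds = []
for _ in range(5):
    y_true = rng.integers(0, 6, size=100)
    scores = rng.random((100, 6))
    y_prob = scores / scores.sum(axis=1, keepdims=True)  # normalize to probabilities
    folds.append((y_true, y_prob))
print(summarize_folds(folds))
```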

CONCLUSION

A deep learning model using smartphone-collected images and metadata for skin lesion diagnosis was successfully developed. The proposed model showed promising performance and could be a potential tool for skin cancer screening.

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/832a/9577400/9f2eb42e288e/fsurg-09-1029991-g001.jpg
