Improving Skin Cancer Classification Using Heavy-Tailed Student T-Distribution in Generative Adversarial Networks (TED-GAN).

Author Information

Ahmad Bilal, Jun Sun, Palade Vasile, You Qi, Mao Li, Zhongjie Mao

Affiliations

School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China.

Centre for Computational Science and Mathematical Modelling, Coventry University, Coventry CV1 5FB, UK.

Publication Information

Diagnostics (Basel). 2021 Nov 19;11(11):2147. doi: 10.3390/diagnostics11112147.

Abstract

Deep learning has gained immense attention from researchers in medicine, especially in medical imaging. The main bottleneck is the unavailability of the sufficiently large medical datasets required for deep learning models to perform well. This paper proposes a new framework consisting of one variational autoencoder (VAE), two generative adversarial networks (GANs), and one auxiliary classifier to artificially generate realistic-looking skin lesion images and improve classification performance. We first train the encoder-decoder network to obtain a latent noise vector that carries information about the image manifold, and then let the generative adversarial network sample its input from this informative noise vector in order to generate skin lesion images. Using informative noise allows the GAN to avoid mode collapse and to converge faster. To improve the diversity of the generated images, we use another GAN with an auxiliary classifier, which samples its noise vector from a heavy-tailed Student t-distribution instead of a standard Gaussian noise distribution. The proposed framework is named TED-GAN, with T from the t-distribution and ED from the encoder-decoder network that is part of the solution. The framework could be used in a broad range of areas in medical imaging. We used it here to generate skin lesion images and obtained improved performance on the skin lesion classification task, with average accuracy rising from 66% to 92.5%. The results show that TED-GAN benefits the classification task more because the heavy-tailed t-distribution yields a more diverse range of generated images.
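The key noise-sampling idea can be illustrated with a short sketch. The snippet below is a minimal illustration (not the authors' released code) of drawing a GAN's latent vector from a heavy-tailed Student t-distribution instead of a standard Gaussian, using PyTorch; the latent dimension, batch size, and degrees of freedom are assumed placeholder values.

```python
# Minimal sketch of heavy-tailed latent sampling for a GAN (illustrative only;
# latent_dim, batch_size, and df are assumptions, not values from the paper).
import torch
from torch.distributions import Normal, StudentT

latent_dim = 128
batch_size = 64
df = 3.0  # low degrees of freedom -> heavier tails than a Gaussian

# Conventional GAN input: z ~ N(0, I)
z_gaussian = Normal(0.0, 1.0).sample((batch_size, latent_dim))

# Heavy-tailed alternative: z ~ StudentT(df). Occasional large-magnitude
# samples push the generator toward less common regions of the image
# manifold, which is the intuition behind the diversity gain reported here.
z_student_t = StudentT(df, loc=0.0, scale=1.0).sample((batch_size, latent_dim))

# Either tensor would then be fed to a generator, e.g. fake = G(z_student_t)
print(z_gaussian.abs().max().item(), z_student_t.abs().max().item())
```

In the full framework described in the abstract, the first GAN instead draws its input from the informative latent vector produced by the trained encoder-decoder network, while the auxiliary-classifier GAN uses t-distributed noise of the kind sketched above.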

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cab0/8621489/9b5815968907/diagnostics-11-02147-g001.jpg
