Deep learning supported disease detection with multi-modality image fusion.

Authors

Sangeetha Francelin Vinnarasi F, Daniel Jesline, Anita Rose J T, Pugalenthi R

Affiliation

St. Joseph's College of Engineering, OMR, Chennai, India.

Publication

J Xray Sci Technol. 2021;29(3):411-434. doi: 10.3233/XST-210851.

Abstract

Multi-modal image fusion techniques aid medical experts in disease diagnosis by providing complementary information from multi-modal medical images, improving the effectiveness of medical disorder analysis and classification. This study proposes a novel deep-learning technique for the fusion of multi-modal medical images. A modified 2D Adaptive Bilateral Filter (M-2D-ABF) algorithm is used in pre-processing to remove various types of noise. Contrast and brightness are improved with the proposed energy-based CLAHE algorithm, which preserves the high-energy regions of the multi-modal images. Images from two different modalities are first registered using mutual information, and the registered images are then fused into a single image. In the proposed scheme, images are fused with a Siamese Neural Network and Entropy (SNNE)-based image fusion algorithm: the medical images are combined using a Siamese convolutional neural network structure and the entropy of the images, with fusion decided by the score of the SoftMax layer and the image entropy. The fused image is segmented using the Fast Fuzzy C-Means Clustering (FFCMC) algorithm and Otsu thresholding. Finally, various features are extracted from the segmented regions, and classification is performed on these features with a Logistic Regression classifier. Evaluation on a publicly available benchmark dataset with various pairs of multi-modal medical images shows that the proposed fusion and classification techniques are competitive with existing state-of-the-art techniques reported in the literature.
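The entropy term of the fusion rule described above can be illustrated with a minimal sketch. This is not the authors' SNNE implementation (which also weighs the Siamese network's SoftMax score); it only shows the entropy side of the decision: each registered source image is scored by its Shannon entropy, and the fused pixel values are weighted toward the more information-rich modality. The `mri` and `ct` arrays are synthetic stand-ins, not real data.

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy (in bits) of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

def entropy_weighted_fusion(img_a, img_b):
    """Fuse two pre-registered images, weighting each by its entropy."""
    ea, eb = shannon_entropy(img_a), shannon_entropy(img_b)
    wa = ea / (ea + eb)  # higher-entropy source contributes more
    fused = wa * img_a.astype(float) + (1.0 - wa) * img_b.astype(float)
    return fused.astype(np.uint8)

# Synthetic stand-ins: a noisy high-entropy slice and a flat low-entropy one.
rng = np.random.default_rng(0)
mri = rng.integers(0, 256, (64, 64), dtype=np.uint8)
ct = np.full((64, 64), 128, dtype=np.uint8)
fused = entropy_weighted_fusion(mri, ct)
```

In the paper's full scheme this per-image weight would be combined with the Siamese CNN's SoftMax score rather than used alone.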
