IEEE Trans Biomed Eng. 2019 Apr;66(4):1172-1183. doi: 10.1109/TBME.2018.2869432. Epub 2018 Sep 10.
Detailed information about objects of interest plays a vital role in modern medical diagnosis. However, existing multimodal sensor fusion methods suffer from low contrast and color distortion during integration. Preserving detail information at high contrast is therefore an important goal in medical image fusion. This paper presents a new multiscale fusion framework based on the local Laplacian pyramid transform (LLP) and an adaptive cloud model (ACM). The proposed framework, LLP+ACM, comprises three key modules. First, the input images are decomposed into detail-enhanced approximate images and residual images using the LLP. Second, the ACM is used to fuse the approximate images, while a salience-match rule is used to fuse the residual images. Third, the fused image is reconstructed using the inverse LLP. Experiments show that the proposed LLP+ACM framework significantly enhances detail information at high contrast and reduces color distortion in the fused images, in both subjective and objective evaluations.
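The following is a minimal sketch of the three-module pipeline described above, not the authors' implementation: a standard Laplacian pyramid stands in for the local Laplacian pyramid (LLP), a simple average stands in for the adaptive cloud model (ACM), and a local-energy comparison stands in for the salience-match rule. All function names and parameters are hypothetical.

```python
import cv2
import numpy as np

def build_pyramid(img, levels=4):
    """Decompose an image into one approximate (top) image and residual layers."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    residuals = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
        residuals.append(gauss[i] - up)          # residual (detail) layer
    return gauss[-1], residuals                  # approximate image, residual layers

def fuse(img_a, img_b, levels=4):
    # Module 1: multiscale decomposition of both inputs.
    approx_a, res_a = build_pyramid(img_a, levels)
    approx_b, res_b = build_pyramid(img_b, levels)

    # Module 2a: fuse approximate images (simple average as a stand-in for ACM).
    approx_f = 0.5 * approx_a + 0.5 * approx_b

    # Module 2b: fuse residual layers by a salience (local-energy) comparison.
    res_f = []
    for ra, rb in zip(res_a, res_b):
        ea = cv2.GaussianBlur(ra * ra, (5, 5), 0)   # local energy of input A
        eb = cv2.GaussianBlur(rb * rb, (5, 5), 0)   # local energy of input B
        res_f.append(np.where(ea >= eb, ra, rb))    # keep the more salient detail

    # Module 3: reconstruct the fused image with the inverse transform.
    fused = approx_f
    for r in reversed(res_f):
        fused = cv2.pyrUp(fused, dstsize=(r.shape[1], r.shape[0])) + r
    return np.clip(fused, 0, 255).astype(np.uint8)
```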