

Image enhancement with art design: a visual feature approach with a CNN-transformer fusion model.

Authors

Xu Ming, Cui Jinwei, Ma Xiaoyu, Zou Zhiyi, Xin Zhisheng, Bilal Muhammad

Affiliations

School of Architecture and Art, Central South University, Changsha, China.

Department of Pharmaceutical Outcomes and Policy, University of Florida, Gainesville, FL, United States of America.

Publication

PeerJ Comput Sci. 2024 Nov 13;10:e2417. doi: 10.7717/peerj-cs.2417. eCollection 2024.

DOI: 10.7717/peerj-cs.2417
PMID: 39650486
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11623052/
Abstract

Graphic design, as a product of the burgeoning new media era, has seen its users' requirements for images continuously evolve. However, external factors such as light and noise often cause graphic design images to become distorted during acquisition. To enhance the definition of these images, this paper introduces a novel image enhancement model based on visual features. Initially, a histogram equalization (HE) algorithm is applied to enhance the graphic design images. Subsequently, image feature extraction is performed using a dual-flow network comprising convolutional neural network (CNN) and Transformer architectures. The CNN employs a residual dense block (RDB) to embed spatial local structure information with varying receptive fields. An improved attention mechanism module, attention feature fusion (AFF), is then introduced to integrate the image features extracted from the dual-flow network. Finally, through image perception quality guided adversarial learning, the model adjusts the initial enhanced image's color and recovers more details. Experimental results demonstrate that the proposed algorithm model achieves enhancement effects exceeding 90% on two large image datasets, which represents a 5%-10% improvement over other models. Furthermore, the algorithm exhibits superior performance in terms of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) image quality evaluation metrics. Our findings indicate that the fusion model significantly enhances image quality, thereby advancing the field of graphic design and showcasing its potential in cultural and creative product design.

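As a concrete reference for two components named in the abstract, the following is a minimal NumPy sketch of histogram equalization (the HE step applied first in the proposed pipeline) and the PSNR evaluation metric. This is an illustrative sketch, not the paper's implementation; the function names are hypothetical.

```python
import numpy as np

def histogram_equalization(img: np.ndarray) -> np.ndarray:
    """Equalize an 8-bit grayscale image via its cumulative histogram (HE)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    if cdf[-1] == cdf_min:  # constant image: nothing to spread out
        return img.copy()
    # Map each intensity so the output histogram is approximately uniform.
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR means the enhanced image is closer to the reference; SSIM (also reported in the paper) additionally compares local structure rather than per-pixel error alone.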

Figures (g001–g009, PMC11623052):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe26/11623052/8b2a9ba4376d/peerj-cs-10-2417-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe26/11623052/a988438504b8/peerj-cs-10-2417-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe26/11623052/e839b5ec4a38/peerj-cs-10-2417-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe26/11623052/978c892ad229/peerj-cs-10-2417-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe26/11623052/3af609e59303/peerj-cs-10-2417-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe26/11623052/4afa4f498e77/peerj-cs-10-2417-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe26/11623052/184385b357b0/peerj-cs-10-2417-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe26/11623052/4d4ba343a2f2/peerj-cs-10-2417-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fe26/11623052/e9ad46b59eda/peerj-cs-10-2417-g009.jpg

Similar articles

1. Image enhancement with art design: a visual feature approach with a CNN-transformer fusion model.
PeerJ Comput Sci. 2024 Nov 13;10:e2417. doi: 10.7717/peerj-cs.2417. eCollection 2024.
2. Magnetic resonance image denoising for Rician noise using a novel hybrid transformer-CNN network (HTC-net) and self-supervised pretraining.
Med Phys. 2025 Mar;52(3):1643-1660. doi: 10.1002/mp.17562. Epub 2024 Dec 6.
3. An enhanced denoising system for mammogram images using deep transformer model with fusion of local and global features.
Sci Rep. 2025 Feb 24;15(1):6562. doi: 10.1038/s41598-025-89451-w.
4. MAFA-Uformer: Multi-attention and dual-branch feature aggregation U-shaped transformer for sparse-view CT reconstruction.
J Xray Sci Technol. 2025 Jan;33(1):157-166. doi: 10.1177/08953996241300016. Epub 2025 Jan 8.
5. MRI super-resolution using similarity distance and multi-scale receptive field based feature fusion GAN and pre-trained slice interpolation network.
Magn Reson Imaging. 2024 Jul;110:195-209. doi: 10.1016/j.mri.2024.04.021. Epub 2024 Apr 21.
6. WiTUnet: A U-shaped architecture integrating CNN and Transformer for improved feature alignment and local information fusion.
Sci Rep. 2024 Oct 26;14(1):25525. doi: 10.1038/s41598-024-76886-w.
7. STEDNet: Swin transformer-based encoder-decoder network for noise reduction in low-dose CT.
Med Phys. 2023 Jul;50(7):4443-4458. doi: 10.1002/mp.16249. Epub 2023 Feb 9.
8. A novel denoising method for low-dose CT images based on transformer and CNN.
Comput Biol Med. 2023 Sep;163:107162. doi: 10.1016/j.compbiomed.2023.107162. Epub 2023 Jun 8.
9. Texture transformer super-resolution for low-dose computed tomography.
Biomed Phys Eng Express. 2022 Nov 4;8(6). doi: 10.1088/2057-1976/ac9da7.
10. Remote sensing image Super-resolution reconstruction by fusing multi-scale receptive fields and hybrid transformer.
Sci Rep. 2025 Jan 16;15(1):2140. doi: 10.1038/s41598-025-86446-5.

Cited by

1. Dual-Stream Contrastive Latent Learning Generative Adversarial Network for Brain Image Synthesis and Tumor Classification.
J Imaging. 2025 Mar 28;11(4):101. doi: 10.3390/jimaging11040101.

References

1. Tomato Maturity Detection and Counting Model Based on MHSA-YOLOv8.
Sensors (Basel). 2023 Jul 26;23(15):6701. doi: 10.3390/s23156701.
2. Packaging Design Based on Deep Learning and Image Enhancement.
Comput Intell Neurosci. 2022 Aug 1;2022:9125234. doi: 10.1155/2022/9125234. eCollection 2022.
3. EnlightenGAN: Deep Light Enhancement Without Paired Supervision.
IEEE Trans Image Process. 2021;30:2340-2349. doi: 10.1109/TIP.2021.3051462. Epub 2021 Jan 27.
4. Fluoroalkylation of Diazo Compounds with Diverse R Reagents.
Chem Asian J. 2020 Jun 2;15(11):1660-1677. doi: 10.1002/asia.202000305. Epub 2020 Apr 21.