
Extending user control for image stylization using hierarchical style transfer networks.

Authors

Khowaja Sunder Ali, Almakdi Sultan, Memon Muhammad Ali, Khuwaja Parus, Sulaiman Adel, Alqahtani Ali, Shaikh Asadullah, Alghamdi Abdullah

Affiliations

Department of Telecommunication, Faculty of Eng. And Tech, University of Sindh, Jamshoro, Sindh, 76090, Pakistan.

Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran, 61441, Najran, Saudi Arabia.

Publication

Heliyon. 2024 Feb 28;10(5):e27012. doi: 10.1016/j.heliyon.2024.e27012. eCollection 2024 Mar 15.

DOI: 10.1016/j.heliyon.2024.e27012
PMID: 39669479
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11636866/
Abstract

Neural style transfer re-renders a content image while fusing into it the features of a style image. Recent studies focus on either multiple style transfer or arbitrary style transfer, using perceptual and fixpoint content losses in their respective network architectures. These losses produce notable stylization results but give the user no control over the degree of stylization, and the results compromise the preservation of detail from the content image. This work proposes the hierarchical style transfer network (HSTN) for image stylization, which lets the user control the degree of applied style through a denoising parameter. The HSTN incorporates a proposed fixpoint control loss that preserves detail from the content image, together with a denoising CNN (DnCNN) and a denoising loss that allow the user to control the level of stylization. An encoder-decoder block, the DnCNN block, and a loss network block form the basic building blocks of HSTN. Extensive experiments were carried out and the results compared with existing works to demonstrate the effectiveness of HSTN. A subjective user evaluation shows that HSTN's stylization best fuses the style and generates unique stylization results while preserving content-image detail, scoring 12% better than the second-best performing method. The proposed work is also among the studies achieving the best trade-off between content and style classification scores, at 37.64% and 60.27%, respectively.
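The abstract describes a three-term objective: a fixpoint-style content term, a style term, and a denoising term whose weight acts as the user-facing stylization control. A minimal numpy sketch of such a combined loss is below; the function names, default weights, and the exact form of each term (mean-squared differences on features, Gram matrices, and images) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, pixels) feature map: the usual
    second-order statistic used in style-transfer style losses."""
    c, n = features.shape
    return features @ features.T / n

def hstn_like_loss(content_feat, stylized_feat,
                   style_gram, stylized_gram,
                   stylized_img, denoised_img,
                   alpha=1.0, beta=10.0, gamma=0.5):
    """Hypothetical combined objective: content preservation plus
    style fusion plus a denoising term. Here gamma stands in for the
    user-controlled denoising parameter from the abstract (assumption)."""
    content_loss = np.mean((stylized_feat - content_feat) ** 2)   # detail preservation
    style_loss = np.mean((stylized_gram - style_gram) ** 2)       # style fusion
    denoise_loss = np.mean((stylized_img - denoised_img) ** 2)    # stylization control
    return alpha * content_loss + beta * style_loss + gamma * denoise_loss
```

Raising gamma pulls the output toward the DnCNN-denoised image, i.e. a weaker, smoother stylization; lowering it lets the style term dominate.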


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c2a/11636866/93e5d6bf9b20/gr005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c2a/11636866/b93020dfd2c3/gr001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c2a/11636866/80e4246034fa/gr002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c2a/11636866/530e8fbcd64f/gr003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7c2a/11636866/802dc9eab103/gr004.jpg

Similar Articles

1. Extending user control for image stylization using hierarchical style transfer networks.
   Heliyon. 2024 Feb 28;10(5):e27012. doi: 10.1016/j.heliyon.2024.e27012. eCollection 2024 Mar 15.
2. DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization.
   IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):3370-3383. doi: 10.1109/TNNLS.2023.3342645. Epub 2025 Feb 6.
3. UPST-NeRF: Universal Photorealistic Style Transfer of Neural Radiance Fields for 3D Scene.
   IEEE Trans Vis Comput Graph. 2025 Apr;31(4):2045-2057. doi: 10.1109/TVCG.2024.3378692. Epub 2025 Feb 27.
4. Uncorrelated feature encoding for faster image style transfer.
   Neural Netw. 2021 Aug;140:148-157. doi: 10.1016/j.neunet.2021.03.007. Epub 2021 Mar 13.
5. MM-NeRF: Multimodal-Guided 3D Multi-Style Transfer of Neural Radiance Field.
   IEEE Trans Vis Comput Graph. 2025 Sep;31(9):5842-5853. doi: 10.1109/TVCG.2024.3476331.
6. Multi-scale feature aggregation and fusion network with self-supervised multi-level perceptual loss for textures preserving low-dose CT denoising.
   Phys Med Biol. 2024 Apr 26;69(10). doi: 10.1088/1361-6560/ad3c91.
7. Stylizing Sparse-View 3D Scenes With Hierarchical Neural Representation.
   IEEE Trans Vis Comput Graph. 2025 Oct;31(10):7876-7889. doi: 10.1109/TVCG.2025.3558468.
8. Neural Network-Based Mapping Mining of Image Style Transfer in Big Data Systems.
   Comput Intell Neurosci. 2021 Aug 21;2021:8387382. doi: 10.1155/2021/8387382. eCollection 2021.
9. STEDNet: Swin transformer-based encoder-decoder network for noise reduction in low-dose CT.
   Med Phys. 2023 Jul;50(7):4443-4458. doi: 10.1002/mp.16249. Epub 2023 Feb 9.
10. Non-Local Representation Based Mutual Affine-Transfer Network for Photorealistic Stylization.
    IEEE Trans Pattern Anal Mach Intell. 2022 Oct;44(10):7046-7061. doi: 10.1109/TPAMI.2021.3095948. Epub 2022 Sep 14.
