

GOYA: Leveraging Generative Art for Content-Style Disentanglement

Authors

Wu Yankun, Nakashima Yuta, Garcia Noa

Affiliation

Intelligence and Sensing Lab, Osaka University, Suita 565-0871, Osaka, Japan.

Publication

J Imaging. 2024 Jun 26;10(7):156. doi: 10.3390/jimaging10070156.

DOI: 10.3390/jimaging10070156
PMID: 39057727
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11278509/
Abstract

The content-style duality is a fundamental element in art. These two dimensions can be easily differentiated by humans: content refers to the objects and concepts in an artwork, and style to the way it looks. Yet, we have not found a way to fully capture this duality with visual representations. While style transfer captures the visual appearance of a single artwork, it fails to generalize to larger sets. Similarly, supervised classification-based methods are impractical since the perception of style lies on a spectrum and not on categorical labels. We thus present GOYA, which captures the artistic knowledge of a cutting-edge generative model for disentangling content and style in art. Experiments show that GOYA explicitly learns to represent the two artistic dimensions (content and style) of the original artistic image, paving the way for leveraging generative models in art analysis.

Figures (PMC)

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c9c/11278509/bedf35d85e5a/jimaging-10-00156-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c9c/11278509/84d0037404da/jimaging-10-00156-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c9c/11278509/15ab2a05651b/jimaging-10-00156-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c9c/11278509/a2c04cf824aa/jimaging-10-00156-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c9c/11278509/ac02c6f16e34/jimaging-10-00156-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c9c/11278509/c2893f062437/jimaging-10-00156-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c9c/11278509/464691c3c4bd/jimaging-10-00156-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c9c/11278509/928195eb30f9/jimaging-10-00156-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c9c/11278509/0b793f5d9a07/jimaging-10-00156-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c9c/11278509/003790ff61ad/jimaging-10-00156-g0A1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9c9c/11278509/1a379086725e/jimaging-10-00156-g0A2.jpg

Similar articles

1
GOYA: Leveraging Generative Art for Content-Style Disentanglement.
J Imaging. 2024 Jun 26;10(7):156. doi: 10.3390/jimaging10070156.
2
Conditional generation of medical images via disentangled adversarial inference.
Med Image Anal. 2021 Aug;72:102106. doi: 10.1016/j.media.2021.102106. Epub 2021 May 24.
3
A study of neural artistic style transfer models and architectures for Indian art styles.
Network. 2023 Feb-Nov;34(4):282-305. doi: 10.1080/0954898X.2023.2252073. Epub 2023 Sep 5.
4
Statistical image properties predict aesthetic ratings in abstract paintings created by neural style transfer.
Front Neurosci. 2022 Oct 13;16:999720. doi: 10.3389/fnins.2022.999720. eCollection 2022.
5
Learning Domain-Agnostic Visual Representation for Computational Pathology Using Medically-Irrelevant Style Transfer Augmentation.
IEEE Trans Med Imaging. 2021 Dec;40(12):3945-3954. doi: 10.1109/TMI.2021.3101985. Epub 2021 Nov 30.
6
CDDSA: Contrastive domain disentanglement and style augmentation for generalizable medical image segmentation.
Med Image Anal. 2023 Oct;89:102904. doi: 10.1016/j.media.2023.102904. Epub 2023 Jul 18.
7
A Unified Framework for Generalizable Style Transfer: Style and Content Separation.
IEEE Trans Image Process. 2020 Jan 31. doi: 10.1109/TIP.2020.2969081.
8
Stain transfer using Generative Adversarial Networks and disentangled features.
Comput Biol Med. 2022 Mar;142:105219. doi: 10.1016/j.compbiomed.2022.105219. Epub 2022 Jan 5.
9
CSAST: Content self-supervised and style contrastive learning for arbitrary style transfer.
Neural Netw. 2023 Jul;164:146-155. doi: 10.1016/j.neunet.2023.04.037. Epub 2023 Apr 26.
10
Predicting the aesthetics of dynamic generative artwork based on statistical image features: A time-dependent model.
PLoS One. 2023 Sep 21;18(9):e0291647. doi: 10.1371/journal.pone.0291647. eCollection 2023.
