


Arbitrary Font Generation by Encoder Learning of Disentangled Features.

Affiliations

ICVSLab., Department of Electronic Engineering, Yeungnam University, 280 Daehak-ro, Gyeongsan 38541, Gyeongbuk, Korea.

Department of Electrical Engineering, Pohang University of Science and Technology, Pohang 37673, Gyeongbuk, Korea.

Publication Info

Sensors (Basel). 2022 Mar 19;22(6):2374. doi: 10.3390/s22062374.

DOI: 10.3390/s22062374
PMID: 35336547
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8950682/
Abstract

Making a new font requires graphical designs for all base characters, a process that consumes considerable time and human resources. For languages with a large number of consonant-vowel combinations in particular, designing every combination independently is a heavy burden. Automatic font generation methods have been proposed to reduce this labor-intensive design problem, but most are GAN-based approaches limited to generating the fonts they were trained on. Some previous methods used two encoders, one for content and the other for style, but their disentanglement of content and style is not effective enough to generate arbitrary fonts. Arbitrary font generation is a challenging task because it is very difficult to learn text and font design separately from the given font images, since each image carries both text content and font style. In this paper, we propose a new automatic font generation method that solves this disentanglement problem. First, we use two stacked inputs: images with the same text but different font styles as the content input, and images with the same font style but different text as the style input. Second, we propose new consistency losses that force any combination of encoded features of the stacked inputs to have the same values. Our experiments show that, by separating the content and style encoders, our method extracts consistent features of text content and font style, and that this works well for generating unseen font designs from a small number of human-designed reference font images. The font designs generated by our method showed better quality, both qualitatively and quantitatively, than those of previous methods for Korean, Chinese, and English characters, e.g., a 17.84 lower FID on unseen fonts.
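The consistency losses described in the abstract can be illustrated with a small sketch. This is a hypothetical NumPy toy, not the authors' implementation: for a stacked input (the same character rendered in several fonts for the content encoder, or the same font rendered on several characters for the style encoder), each encoding is penalized for deviating from the stack's mean encoding, so the loss is zero exactly when all encodings of the stack agree.

```python
import numpy as np

def consistency_loss(features):
    """Mean squared deviation of each encoding from the stack mean.

    features: (k, dim) array of k encodings that should agree, e.g.
    content-encoder outputs for the same character across k fonts.
    Returns 0.0 iff all k encodings are identical, which is the
    agreement the paper's consistency losses enforce.
    """
    mean = features.mean(axis=0, keepdims=True)
    return float(((features - mean) ** 2).mean())

# Toy check with stand-in "encodings" (4 stacked inputs, 8-dim features).
same = np.ones((4, 8))                              # identical encodings
mixed = np.arange(32, dtype=float).reshape(4, 8)    # disagreeing encodings
print(consistency_loss(same))   # → 0.0
print(consistency_loss(mixed))  # → 80.0
```

In training, one such term would be applied to the content encoder's outputs over the content stack and another to the style encoder's outputs over the style stack, alongside the usual generation losses.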

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/257a4c631f13/sensors-22-02374-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/24f903f38adb/sensors-22-02374-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/b7ebe0cdae9d/sensors-22-02374-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/7e20100bde7e/sensors-22-02374-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/be4364937339/sensors-22-02374-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/9ee55935f09c/sensors-22-02374-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/17fae389da5b/sensors-22-02374-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/9a6876c6d547/sensors-22-02374-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/0043d3e9ce70/sensors-22-02374-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/bd1c3de3674f/sensors-22-02374-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/8e39a18ac91a/sensors-22-02374-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/216b51e57766/sensors-22-02374-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/69614b4fa1ea/sensors-22-02374-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/e373a1e6f4b1/sensors-22-02374-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/8fc90c3c3b7b/sensors-22-02374-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/3bae7c2003a3/sensors-22-02374-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bb27/8950682/3b8982bfc5da/sensors-22-02374-g017.jpg

Similar Articles

1
Arbitrary Font Generation by Encoder Learning of Disentangled Features.
Sensors (Basel). 2022 Mar 19;22(6):2374. doi: 10.3390/s22062374.
2
Few-Shot Font Generation With Weakly Supervised Localized Representations.
IEEE Trans Pattern Anal Mach Intell. 2024 Mar;46(3):1479-1495. doi: 10.1109/TPAMI.2022.3196675. Epub 2024 Feb 6.
3
Automatic Generation of Typographic Font From Small Font Subset.
IEEE Comput Graph Appl. 2020 Jan-Feb;40(1):99-111. doi: 10.1109/MCG.2019.2931431. Epub 2019 Jul 31.
4
Design and Implementation of Dongba Character Font Style Transfer Model Based on AFGAN.
Sensors (Basel). 2024 May 26;24(11):3424. doi: 10.3390/s24113424.
5
Learning Implicit Glyph Shape Representation.
IEEE Trans Vis Comput Graph. 2023 Oct;29(10):4172-4182. doi: 10.1109/TVCG.2022.3183400. Epub 2023 Sep 1.
6
A Unified Framework for Generalizable Style Transfer: Style and Content Separation.
IEEE Trans Image Process. 2020 Jan 31. doi: 10.1109/TIP.2020.2969081.
7
Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding.
Front Artif Intell. 2023 Jun 12;6:1142997. doi: 10.3389/frai.2023.1142997. eCollection 2023.
8
Conditional generation of medical images via disentangled adversarial inference.
Med Image Anal. 2021 Aug;72:102106. doi: 10.1016/j.media.2021.102106. Epub 2021 May 24.
9
Toward Exploiting Second-Order Feature Statistics for Arbitrary Image Style Transfer.
Sensors (Basel). 2022 Mar 29;22(7):2611. doi: 10.3390/s22072611.
10
A New Language-Independent Deep CNN for Scene Text Detection and Style Transfer in Social Media Images.
IEEE Trans Image Process. 2023;32:3552-3566. doi: 10.1109/TIP.2023.3287038. Epub 2023 Jun 29.