Suppr 超能文献



Understanding consumer perception and acceptance of AI art through eye tracking and Bidirectional Encoder Representations from Transformers-based sentiment analysis.

Authors

Yu Tao, Xu Junping, Pan Younghwan

Affiliation

Department of Smart Experience Design, Kookmin University, Seoul 02707, Republic of Korea.

Publication

J Eye Mov Res. 2024 Dec 22;17(5). doi: 10.16910/jemr.17.5.3. eCollection 2024.

DOI: 10.16910/jemr.17.5.3
PMID: 40270691
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11787909/
Abstract

This study investigates public perception and acceptance of AI-generated art using an integrated system that merges eye-tracking methodologies with advanced bidirectional encoder representations from transformers (BERT)-based sentiment analysis. Eye-tracking methods systematically document the visual trajectories and fixation spots of consumers viewing AI-generated artworks, elucidating the inherent relationship between visual activity and perception. Thereafter, the BERT-based sentiment analysis algorithm extracts emotional responses and aesthetic assessments from numerous internet reviews, offering a robust instrument for evaluating public approval and aesthetic perception. The findings indicate that consumer perception of AI-generated art is markedly affected by visual attention behavior, whereas sentiment analysis uncovers substantial disparities in aesthetic assessments. This paper introduces enhancements to the BERT model via domain-specific pre-training and hyperparameter optimization utilizing deep Gaussian processes and dynamic Bayesian optimization, resulting in substantial increases in classification accuracy and resilience. This study thoroughly examines the underlying mechanisms of public perception and assessment of AI-generated art, assesses the potential of these techniques for practical application in art creation and evaluation, and offers a novel perspective and scientific foundation for future research and application of AI art.
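The fixation analysis the abstract describes, recording fixation spots and visual trajectories, can be illustrated with a minimal sketch of dispersion-threshold (I-DT) fixation detection, a standard way to turn raw gaze samples into fixations. The sample format, thresholds, and synthetic trace below are illustrative assumptions, not the authors' actual pipeline.

```python
def dispersion(window):
    """Bounding-box dispersion of a gaze-sample window: (max_x - min_x) + (max_y - min_y)."""
    xs = [s[1] for s in window]
    ys = [s[2] for s in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=25.0, min_duration=100.0):
    """I-DT fixation detection over (t_ms, x_px, y_px) gaze samples.

    Grows a window to the minimum duration; if its dispersion stays under the
    threshold, extends it as far as possible and records one fixation.
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:  # remaining tail is too short to hold a fixation
            break
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            while j + 1 < n and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append({
                "start": samples[i][0], "end": samples[j][0],
                "x": sum(xs) / len(xs), "y": sum(ys) / len(ys),
            })
            i = j + 1  # resume after the recorded fixation
        else:
            i += 1  # slide past a saccade sample
    return fixations

# Synthetic trace: a steady fixation, a two-sample saccade, a second fixation.
trace = [(t, 100 + (t // 10) % 2, 100) for t in range(0, 300, 10)]
trace += [(300, 200, 150), (310, 300, 180)]
trace += [(t, 400, 200) for t in range(320, 620, 10)]
print(len(detect_fixations(trace)))  # → 2
```

Fixation count, duration (end minus start), and centroid position are exactly the quantities that analyses like the one above correlate with perception measures.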

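The hyperparameter-optimization step can likewise be sketched: a toy Bayesian-optimization loop with a plain Gaussian-process surrogate and expected-improvement acquisition, tuning a single log learning rate against a stand-in objective. The kernel, lengthscale, grid, and objective are illustrative assumptions; the paper's deep-Gaussian-process and dynamic variants are considerably more elaborate.

```python
import math

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel on a 1-D input (here, log10 learning rate)."""
    return math.exp(-((a - b) ** 2) / (2 * ls * ls))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, jitter=1e-6):
    """Posterior mean and stddev of a zero-mean GP at query point xq."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (jitter if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    k_star = [rbf(x, xq) for x in xs]
    mu = sum(k_star[i] * alpha[i] for i in range(n))
    v = solve(K, k_star)
    var = max(1e-12, rbf(xq, xq) - sum(k_star[i] * v[i] for i in range(n)))
    return mu, math.sqrt(var)

def expected_improvement(mu, sigma, best):
    """EI acquisition for maximization: (mu-best)·Φ(z) + σ·φ(z)."""
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (mu - best) * cdf + sigma * pdf

def objective(log_lr):
    """Stand-in for validation accuracy as a function of log10(learning rate)."""
    return 0.9 - 0.1 * (log_lr + 4.0) ** 2  # peaks at lr = 1e-4

xs, ys = [-6.0, -2.0], [objective(-6.0), objective(-2.0)]  # initial samples
grid = [-6.0 + 0.05 * i for i in range(81)]                # candidate log-lrs
for _ in range(12):
    best = max(ys)
    xq = max(grid, key=lambda g: expected_improvement(*gp_posterior(xs, ys, g), best))
    xs.append(xq)
    ys.append(objective(xq))
print("best log10(lr) found:", round(xs[ys.index(max(ys))], 2))
```

In a real setting the objective would be a full fine-tuning run evaluated on held-out data, which is exactly why a sample-efficient surrogate-based search is preferred over grid search.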

Figures 1–9 and Equations 1–21: image files are available with the PMC full text at https://pmc.ncbi.nlm.nih.gov/articles/PMC11787909/.

Similar Articles

1. Understanding consumer perception and acceptance of AI art through eye tracking and Bidirectional Encoder Representations from Transformers-based sentiment analysis. J Eye Mov Res. 2024 Dec 22;17(5). doi: 10.16910/jemr.17.5.3. eCollection 2024.
2. Vaccine sentiment analysis using BERT + NBSVM and geo-spatial approaches. J Supercomput. 2023 May 7:1-31. doi: 10.1007/s11227-023-05319-8.
3. Transfer Learning for Sentiment Classification Using Bidirectional Encoder Representations from Transformers (BERT) Model. Sensors (Basel). 2023 May 31;23(11):5232. doi: 10.3390/s23115232.
4. Understanding the Perceptions of Healthcare Researchers Regarding ChatGPT: A Study Based on Bidirectional Encoder Representation from Transformers (BERT) Sentiment Analysis and Topic Modeling. Ann Biomed Eng. 2023 Aug;51(8):1654-1656. doi: 10.1007/s10439-023-03222-0. Epub 2023 May 2.
5. LSTM-DGWO-Based Sentiment Analysis Framework for Analyzing Online Customer Reviews. Comput Intell Neurosci. 2023 Feb 11;2023:6348831. doi: 10.1155/2023/6348831. eCollection 2023.
6. Comparing deep learning architectures for sentiment analysis on drug reviews. J Biomed Inform. 2020 Oct;110:103539. doi: 10.1016/j.jbi.2020.103539. Epub 2020 Aug 17.
7. Pedagogical sentiment analysis based on the BERT-CNN-BiGRU-attention model in the context of intercultural communication barriers. PeerJ Comput Sci. 2024 Jul 3;10:e2166. doi: 10.7717/peerj-cs.2166. eCollection 2024.
8. A BERT Framework to Sentiment Analysis of Tweets. Sensors (Basel). 2023 Jan 2;23(1):506. doi: 10.3390/s23010506.
9. Research on sentiment classification for netizens based on the BERT-BiLSTM-TextCNN model. PeerJ Comput Sci. 2022 Jun 8;8:e1005. doi: 10.7717/peerj-cs.1005. eCollection 2022.
10. Text Sentiment Classification Based on BERT Embedding and Sliced Multi-Head Self-Attention Bi-GRU. Sensors (Basel). 2023 Jan 28;23(3):1481. doi: 10.3390/s23031481.

References Cited in This Article

1. Effect of Action Video Games in Eye Movement Behavior: A Systematic Review. J Eye Mov Res. 2024 Sep 25;17(3). doi: 10.16910/jemr.17.3.6. eCollection 2024.
2. The Observer's Lens: The Impact of Personality Traits and Gaze on Facial Impression Inferences. J Eye Mov Res. 2024 Aug 19;17(3). doi: 10.16910/jemr.17.3.5. eCollection 2024.
3. Classification framework to identify similar visual scan paths using multiple similarity metrics. J Eye Mov Res. 2024 Aug 9;17(3). doi: 10.16910/jemr.17.3.4. eCollection 2024.
4. The level of skills involved in an observation-based gait analysis. J Eye Mov Res. 2024 Sep 25;17(3). doi: 10.16910/jemr.17.3.1. eCollection 2024.
5. Quantifying Dwell Time With Location-based Augmented Reality: Dynamic AOI Analysis on Mobile Eye Tracking Data With Vision Transformer. J Eye Mov Res. 2024 Apr 29;17(3). doi: 10.16910/jemr.17.3.3. eCollection 2024.
6. Generative artificial intelligence, human creativity, and art. PNAS Nexus. 2024 Mar 5;3(3):pgae052. doi: 10.1093/pnasnexus/pgae052. eCollection 2024 Mar.
7. Eye Tracking in Optometry: A Systematic Review. J Eye Mov Res. 2023 Aug 16;16(3). doi: 10.16910/jemr.16.3.3. eCollection 2023.
8. Eyes can tell: Assessment of implicit attitudes toward AI art. Iperception. 2023 Oct 30;14(5):20416695231209846. doi: 10.1177/20416695231209846. eCollection 2023 Sep-Oct.
9. Applying machine learning EEG signal classification to emotion-related brain anticipatory activity. F1000Res. 2021 Oct 13;9:173. doi: 10.12688/f1000research.22202.3. eCollection 2020.
10. The Impact of Online Reviews on Consumers' Purchasing Decisions: Evidence From an Eye-Tracking Study. Front Psychol. 2022 Jun 8;13:865702. doi: 10.3389/fpsyg.2022.865702. eCollection 2022.