

Assessment of human emotional reactions to visual stimuli "deep-dreamed" by artificial neural networks.

Authors

Marczak-Czajka Agnieszka, Redgrave Timothy, Mitcheff Mahsa, Villano Michael, Czajka Adam

Affiliations

Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States.

Department of Psychology, University of Notre Dame, Notre Dame, IN, United States.

Publication

Front Psychol. 2024 Dec 24;15:1509392. doi: 10.3389/fpsyg.2024.1509392. eCollection 2024.

DOI: 10.3389/fpsyg.2024.1509392
PMID: 39776961
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11703666/
Abstract

INTRODUCTION

While it is documented that visual stimuli synthesized by Artificial Neural Networks (ANN) may evoke emotional reactions, the precise mechanisms connecting the strength and type of such reactions with how ANNs are used to synthesize the stimuli are yet to be discovered. Understanding these mechanisms would allow the design of methods that synthesize images attenuating or enhancing selected emotional states, which may provide unobtrusive and widely applicable treatment of mental dysfunctions and disorders.

METHODS

The Convolutional Neural Network (CNN), a type of ANN used in computer-vision tasks that models how humans solve visual tasks, was applied to synthesize ("dream" or "hallucinate") images with no semantic content that maximize activations of neurons in precisely selected layers of the CNN. The emotions evoked in 150 human subjects observing these images were self-reported on a two-dimensional scale (arousal and valence) using self-assessment manikin (SAM) figures. Correlations were calculated between arousal and valence values and image visual properties (e.g., color, brightness, clutter feature congestion, and clutter sub-band entropy), as well as the position of the CNN layers stimulated to obtain a given image.
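The activation-maximization ("dreaming") procedure described above can be sketched as gradient ascent on a layer's activation energy. The toy one-layer ReLU network, shapes, step size, and gradient normalization below are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of activation maximization ("dreaming") on a toy one-layer
# network. All names and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))      # toy "layer": 16 units over a 64-pixel image
x = rng.standard_normal(64) * 0.01     # start from a near-blank "image"

def activation_energy(v):
    """Energy of the layer's ReLU activations for image v."""
    z = W @ v
    return 0.5 * np.sum(np.maximum(z, 0.0) ** 2)

e0 = activation_energy(x)

for _ in range(200):                       # gradient ascent on the activations
    z = W @ x
    grad = W.T @ np.maximum(z, 0.0)        # d(energy)/dx for the ReLU layer
    grad /= np.abs(grad).mean() + 1e-8     # DeepDream-style gradient normalization
    x += 0.01 * grad

# the optimized "image" now drives the layer much harder than the initial one
```

In practice the same loop is run against a chosen layer of a pretrained CNN, with the 64-element vector replaced by an image tensor and the gradient obtained by backpropagation.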

RESULTS

Synthesized images that maximized activations of some of the CNN layers led to significantly higher or lower arousal and valence levels compared to the average subject's reactions. Multiple linear regression analysis found that a small set of selected global visual features of the images (hue, feature congestion, and sub-band entropy) are significant predictors of the measured arousal; however, no statistically significant dependencies were found between global visual features and the measured valence.
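The reported regression can be illustrated with ordinary least squares on the three named predictors. The data below are synthetic and the coefficients are assumptions, used only to show the shape of the analysis:

```python
# Illustrative multiple linear regression of arousal on three global image
# features (hue, feature congestion, sub-band entropy). Synthetic data only;
# the "true" coefficients are assumptions, not the study's measurements.
import numpy as np

rng = np.random.default_rng(1)
n = 150                                    # one rating per (synthetic) subject
hue = rng.uniform(0.0, 1.0, n)
congestion = rng.uniform(0.0, 1.0, n)
entropy = rng.uniform(0.0, 1.0, n)
arousal = 0.8 * hue + 0.5 * congestion - 0.3 * entropy + rng.normal(0.0, 0.05, n)

# design matrix: intercept column plus the three features
X = np.column_stack([np.ones(n), hue, congestion, entropy])
beta, *_ = np.linalg.lstsq(X, arousal, rcond=None)   # OLS coefficient estimates
```

With real ratings, per-coefficient significance tests (e.g., via a statistics package such as statsmodels) would back the predictor claims above.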

CONCLUSION

This study demonstrates that the specific method used in this work, synthesizing images by maximizing activations of small and precisely selected parts of the CNN, can produce visual stimuli that enhance or attenuate emotional reactions. It paves the way for tools that provide non-invasive visual stimulation to support wellbeing (managing stress, enhancing mood) and to assist patients with certain mental conditions, complementing traditional therapeutic interventions.


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/24b5/11703666/e3664b3dfa26/fpsyg-15-1509392-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/24b5/11703666/23c92c8bc607/fpsyg-15-1509392-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/24b5/11703666/20dd97783e06/fpsyg-15-1509392-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/24b5/11703666/9855ecf5ba8b/fpsyg-15-1509392-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/24b5/11703666/aeafc9a198cc/fpsyg-15-1509392-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/24b5/11703666/eb9092055258/fpsyg-15-1509392-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/24b5/11703666/9cc8b5bb02f3/fpsyg-15-1509392-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/24b5/11703666/c6891747edae/fpsyg-15-1509392-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/24b5/11703666/2e56b569c698/fpsyg-15-1509392-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/24b5/11703666/79cf373c0e32/fpsyg-15-1509392-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/24b5/11703666/e2b73f869d58/fpsyg-15-1509392-g0011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/24b5/11703666/f1779757b2a7/fpsyg-15-1509392-g0012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/24b5/11703666/8c992c5ff517/fpsyg-15-1509392-g0013.jpg

Similar Articles

1. Assessment of human emotional reactions to visual stimuli "deep-dreamed" by artificial neural networks. Front Psychol. 2024 Dec 24;15:1509392. doi: 10.3389/fpsyg.2024.1509392. eCollection 2024.
2. Deep neural network predicts emotional responses of the human brain from functional magnetic resonance imaging. Neuroimage. 2019 Feb 1;186:607-627. doi: 10.1016/j.neuroimage.2018.10.054. Epub 2018 Oct 23.
3. Brain tumor segmentation and detection in MRI using convolutional neural networks and VGG16. Cancer Biomark. 2025 Mar;42(3):18758592241311184. doi: 10.1177/18758592241311184. Epub 2025 Apr 4.
4. Predicting the Arousal and Valence Values of Emotional States Using Learned, Predesigned, and Deep Visual Features. Sensors (Basel). 2024 Jul 7;24(13):4398. doi: 10.3390/s24134398.
5. Elicited emotion: effects of inoculation of an art style on emotionally strong images. Exp Brain Res. 2025 Mar 12;243(4):89. doi: 10.1007/s00221-025-07030-x.
6. Large-Scale, High-Resolution Comparison of the Core Visual Object Recognition Behavior of Humans, Monkeys, and State-of-the-Art Deep Artificial Neural Networks. J Neurosci. 2018 Aug 15;38(33):7255-7269. doi: 10.1523/JNEUROSCI.0388-18.2018. Epub 2018 Jul 13.
7. Recognizing Image Semantic Information Through Multi-Feature Fusion and SSAE-Based Deep Network. J Med Syst. 2020 Jan 3;44(2):46. doi: 10.1007/s10916-019-1498-8.
8. CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis. Sci Rep. 2022 Aug 19;12(1):14122. doi: 10.1038/s41598-022-18257-x.
9. Emotional stimuli candidates for behavioural intervention in the prevention of early childhood caries: a pilot study. BMC Oral Health. 2019 Feb 18;19(1):33. doi: 10.1186/s12903-019-0718-4.
10. Stimuli-Aware Visual Emotion Analysis. IEEE Trans Image Process. 2021;30:7432-7445. doi: 10.1109/TIP.2021.3106813. Epub 2021 Sep 1.
