

Joint analysis of interaction and psychological characteristics in English teaching based on multimodal integration.

Affiliation

School of Culture and Education, Shaanxi University of Science & Technology, 710021, Xi'an, Shaanxi, China.

Publication

BMC Psychol. 2024 Mar 4;12(1):121. doi: 10.1186/s40359-024-01585-0.

DOI: 10.1186/s40359-024-01585-0
PMID: 38439095
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10913431/
Abstract

The intersection of psychology and English teaching is profound, as the application of psychological principles not only guides specific English instruction but also elevates the overall quality of teaching. This paper takes a multimodal approach, incorporating image, acoustics, and text information, to construct a joint analysis model for English teaching interaction and psychological characteristics. The novel addition of an attention mechanism in the multimodal fusion process enables the development of an English teaching psychological characteristics recognition model. The initial step involves balancing the proportions of each emotion, followed by achieving multimodal alignment. In the cross-modal stage, the interaction of image, acoustics, and text is facilitated through a cross-modal attention mechanism. The utilization of a multi-attention mechanism not only enhances the network's representation capabilities but also streamlines the complexity of the model. Empirical results demonstrate the model's proficiency in accurately identifying five psychological characteristics. The proposed method achieves a classification accuracy of 90.40% for psychological features, with a commendable accuracy of 78.47% in multimodal classification. Furthermore, the incorporation of the attention mechanism in feature fusion contributes to an improved fusion effect.
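The cross-modal attention step described above can be sketched as scaled dot-product attention in which one modality's features act as queries over another modality's keys and values. The following is a minimal pure-Python illustration under assumed toy dimensions — the function names, shapes, and example vectors are hypothetical and are not taken from the paper's actual architecture.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_modal_attention(queries, keys, values):
    """Each query vector (e.g. a text-token feature) attends over
    another modality's key/value vectors (e.g. acoustic-frame
    features), producing a fused representation per query."""
    d = len(keys[0])  # key dimensionality, for the 1/sqrt(d) scaling
    out = []
    for q in queries:
        # Scaled dot-product scores between this query and every key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted sum of the other modality's value vectors.
        fused = [sum(w * v[j] for w, v in zip(weights, values))
                 for j in range(len(values[0]))]
        out.append(fused)
    return out

# Toy example: 2 text queries attend over 3 acoustic frames.
text = [[1.0, 0.0], [0.0, 1.0]]
audio_k = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
audio_v = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
fused = cross_modal_attention(text, audio_k, audio_v)
```

Because attention weights form a convex combination, each fused vector stays within the span of the attended modality's values; in a full model this exchange would typically run in both directions for each pair of modalities before the final fusion.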


Figures:
Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83a7/10913431/66ed2acf9ee2/40359_2024_1585_Fig1_HTML.jpg
Fig. 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83a7/10913431/dd3b5dddfb68/40359_2024_1585_Fig2_HTML.jpg
Fig. 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83a7/10913431/803742d7e472/40359_2024_1585_Fig3_HTML.jpg
Fig. 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83a7/10913431/de9e40d986eb/40359_2024_1585_Fig4_HTML.jpg
Fig. 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83a7/10913431/b4e714c27d98/40359_2024_1585_Fig5_HTML.jpg
Fig. 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83a7/10913431/c57b3d385306/40359_2024_1585_Fig6_HTML.jpg

Similar Articles

1. Joint analysis of interaction and psychological characteristics in English teaching based on multimodal integration.
BMC Psychol. 2024 Mar 4;12(1):121. doi: 10.1186/s40359-024-01585-0.
2. AVaTER: Fusing Audio, Visual, and Textual Modalities Using Cross-Modal Attention for Emotion Recognition.
Sensors (Basel). 2024 Sep 10;24(18):5862. doi: 10.3390/s24185862.
3. Research on cross-modal emotion recognition based on multi-layer semantic fusion.
Math Biosci Eng. 2024 Jan 17;21(2):2488-2514. doi: 10.3934/mbe.2024110.
4. MIFAD-Net: Multi-Layer Interactive Feature Fusion Network With Angular Distance Loss for Face Emotion Recognition.
Front Psychol. 2021 Oct 22;12:762795. doi: 10.3389/fpsyg.2021.762795. eCollection 2021.
5. A novel feature fusion network for multimodal emotion recognition from EEG and eye movement signals.
Front Neurosci. 2023 Aug 3;17:1234162. doi: 10.3389/fnins.2023.1234162. eCollection 2023.
6. CCGL-YOLOV5: A cross-modal cross-scale global-local attention YOLOV5 lung tumor detection model.
Comput Biol Med. 2023 Oct;165:107387. doi: 10.1016/j.compbiomed.2023.107387. Epub 2023 Aug 28.
7. HiMul-LGG: A hierarchical decision fusion-based local-global graph neural network for multimodal emotion recognition in conversation.
Neural Netw. 2025 Jan;181:106764. doi: 10.1016/j.neunet.2024.106764. Epub 2024 Sep 28.
8. Hierarchical Attention-Based Multimodal Fusion Network for Video Emotion Recognition.
Comput Intell Neurosci. 2021 Sep 25;2021:5585041. doi: 10.1155/2021/5585041. eCollection 2021.
9. Multimodal Emotion Recognition Based on Cascaded Multichannel and Hierarchical Fusion.
Comput Intell Neurosci. 2023 Jan 5;2023:9645611. doi: 10.1155/2023/9645611. eCollection 2023.
10. Multimodal English Teaching Classroom Interaction Based on Artificial Neural Network.
Comput Intell Neurosci. 2022 May 28;2022:3141451. doi: 10.1155/2022/3141451. eCollection 2022.

References Cited in This Article

1. Graph convolutional networks: a comprehensive review.
Comput Soc Netw. 2019;6(1):11. doi: 10.1186/s40649-019-0069-y. Epub 2019 Nov 10.
2. The multi-modal fusion in visual question answering: a review of attention mechanisms.
PeerJ Comput Sci. 2023 May 30;9:e1400. doi: 10.7717/peerj-cs.1400. eCollection 2023.
3. Advances in Multimodal Emotion Recognition Based on Brain-Computer Interfaces.
Brain Sci. 2020 Sep 29;10(10):687. doi: 10.3390/brainsci10100687.
4. A Comprehensive Survey on Graph Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2021 Jan;32(1):4-24. doi: 10.1109/TNNLS.2020.2978386. Epub 2021 Jan 4.
5. A Review of Emotion Recognition Using Physiological Signals.
Sensors (Basel). 2018 Jun 28;18(7):2074. doi: 10.3390/s18072074.