

Multimodal Discourse Analysis of Interactive Environment of Film Discourse Based on Deep Learning.

Affiliations

School of Animation and Digital Arts, Communication University of Zhejiang, Hangzhou 310018, China.

College of Film, Shanghai Theatre Academy, Shanghai 201112, China.

Publication information

J Environ Public Health. 2022 Aug 31;2022:1606926. doi: 10.1155/2022/1606926. eCollection 2022.

Abstract

With the advent of the information age, language is no longer the only way to construct meaning. Besides language, a variety of social symbols, such as gestures, images, music, and three-dimensional animation, are increasingly involved in the social practice of meaning construction. Traditional single-modal sentiment analysis methods rely on a single form of expression and cannot fully exploit information from multiple modalities, resulting in low sentiment classification accuracy. Deep learning techniques can automatically mine emotional states in images, texts, and videos and can effectively combine information from multiple modalities. In the book, the first systematic and comprehensive analytical framework of visual grammar is proposed, and the expression of image meaning is discussed from the perspectives of representational, interactive, and compositional meaning, in parallel with the three metafunctions of Halliday's systemic functional grammar. In the past, films were usually discussed from the macro perspectives of literary criticism, film criticism, psychology, aesthetics, and so on; multimodal analysis theory provides film researchers with a set of methods for analyzing images, music, and words at the same time. In view of these considerations, this paper adopts the perspective of social semiotics and, based on Halliday's systemic functional linguistics and Gan He's "visual grammar," builds a multimodal interaction model as a tool for analyzing film discourse, with reference to appraisal theory.
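The abstract's claim that deep learning can combine information from multiple modalities is commonly realized by late fusion: each modality (text, image, audio) is encoded into a fixed-length feature vector by its own network, the vectors are concatenated, and a shared classifier predicts the sentiment class. The sketch below illustrates only that fusion pattern; the encoders, dimensions, and weights are illustrative stand-ins, not the paper's actual model.

```python
import numpy as np

# Hypothetical late-fusion sentiment classifier. Each modality is encoded
# separately, the feature vectors are concatenated, and a single linear
# layer with softmax produces class probabilities. All shapes and weights
# here are illustrative assumptions, not taken from the paper.

rng = np.random.default_rng(0)

def encode(modality_input, dim):
    """Stand-in for a per-modality deep encoder (e.g. CNN for images,
    LSTM for text). A real encoder would be learned; a fixed random
    projection keeps the example self-contained."""
    w = rng.standard_normal((len(modality_input), dim))
    return modality_input @ w

def fuse_and_classify(features, weights, bias):
    """Concatenate modality features, then apply linear layer + softmax."""
    fused = np.concatenate(features)          # late fusion by concatenation
    logits = weights @ fused + bias
    exp = np.exp(logits - logits.max())       # numerically stable softmax
    return exp / exp.sum()                    # probabilities over classes

# Raw inputs of different sizes, each mapped to a 4-dim feature vector.
text_feat = encode(rng.standard_normal(10), 4)
image_feat = encode(rng.standard_normal(20), 4)
audio_feat = encode(rng.standard_normal(15), 4)

n_classes = 3                                 # e.g. negative/neutral/positive
W = rng.standard_normal((n_classes, 12))      # 12 = 3 modalities x 4 dims
b = np.zeros(n_classes)

probs = fuse_and_classify([text_feat, image_feat, audio_feat], W, b)
print(probs.shape)                            # (3,), summing to 1
```

Concatenation is the simplest fusion strategy; attention-based or tensor fusion would weight modalities adaptively, but the end-to-end shape of the pipeline is the same.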


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a38/9451998/4906a99b2960/JEPH2022-1606926.001.jpg
