

Modeling Subjective Affect Annotations with Multi-Task Learning

Affiliation

Department of IT, Multimedia and Telecommunications (IMT), Universitat Oberta de Catalunya, 08018 Barcelona, Spain.

Publication

Sensors (Basel). 2022 Jul 13;22(14):5245. doi: 10.3390/s22145245.

DOI: 10.3390/s22145245
PMID: 35890925
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9319580/
Abstract

In supervised learning, the generalization capabilities of trained models are based on the available annotations. Usually, multiple annotators are asked to annotate the dataset samples and, then, the common practice is to aggregate the different annotations by computing average scores or majority voting, and train and test models on these aggregated annotations. However, this practice is not suitable for all types of problems, especially when the subjective information of each annotator matters for the task modeling. For example, emotions experienced while watching a video or evoked by other sources of content, such as news headlines, are subjective: different individuals might perceive or experience different emotions. The aggregated annotations in emotion modeling may lose the subjective information and actually represent an annotation bias. In this paper, we highlight the weaknesses of models that are trained on aggregated annotations for modeling tasks related to affect. More concretely, we compare two generic Deep Learning architectures: a Single-Task (ST) architecture and a Multi-Task (MT) architecture. While the ST architecture models single emotional perception each time, the MT architecture jointly models every single annotation and the aggregated annotations at once. Our results show that the MT approach can more accurately model every single annotation and the aggregated annotations when compared to methods that are directly trained on the aggregated annotations. Furthermore, the MT approach achieves state-of-the-art results on the COGNIMUSE, IEMOCAP, and SemEval_2007 benchmarks.
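The abstract contrasts a Single-Task (ST) architecture, which models one annotator's perception at a time, with a Multi-Task (MT) architecture, in which a shared encoder jointly predicts every individual annotation plus the aggregated annotation. This is not the authors' code; a minimal NumPy sketch of that MT head layout, where the class name, layer sizes, and initialization are all assumptions for illustration:

```python
# Hypothetical sketch of a Multi-Task (MT) affect model: a shared encoder
# feeds one linear output head per annotator, plus one head for the
# aggregated annotation, so all targets are predicted jointly.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class MultiTaskAffectModel:
    def __init__(self, in_dim, hidden_dim, n_annotators):
        # Shared encoder weights, common to every task
        self.W_enc = rng.normal(scale=0.1, size=(in_dim, hidden_dim))
        # One head per annotator + one head for the aggregate label (last)
        self.heads = [rng.normal(scale=0.1, size=(hidden_dim, 1))
                      for _ in range(n_annotators + 1)]

    def forward(self, x):
        h = relu(x @ self.W_enc)      # shared representation
        # Per-annotator predictions, then the aggregated prediction
        return [h @ W for W in self.heads]

model = MultiTaskAffectModel(in_dim=16, hidden_dim=8, n_annotators=3)
x = rng.normal(size=(4, 16))          # batch of 4 samples
outputs = model.forward(x)
print(len(outputs))                   # 4 heads: 3 annotators + 1 aggregate
print(outputs[0].shape)               # (4, 1)
```

Because every head backpropagates through the same encoder during training, the subjective signal in each individual annotation can regularize the aggregate prediction rather than being averaged away, which is the effect the abstract attributes to the MT approach.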


Figures (g001-g005):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a7eb/9319580/9d07ecc048db/sensors-22-05245-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a7eb/9319580/3322ba85928a/sensors-22-05245-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a7eb/9319580/5c12149e7d83/sensors-22-05245-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a7eb/9319580/454474d07163/sensors-22-05245-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a7eb/9319580/0ea4d2d0ca87/sensors-22-05245-g005.jpg

Similar articles

1
Adaptive Annotation Correlation Based Multi-Annotation Learning for Calibrated Medical Image Segmentation.
IEEE J Biomed Health Inform. 2024 Dec;28(12):7175-7183. doi: 10.1109/JBHI.2024.3451210. Epub 2024 Dec 5.
2
Modeling annotator preference and stochastic annotation error for medical image segmentation.
Med Image Anal. 2024 Feb;92:103028. doi: 10.1016/j.media.2023.103028. Epub 2023 Nov 17.
3
Fast machine learning annotation in the medical domain: a semi-automated video annotation tool for gastroenterologists.
Biomed Eng Online. 2022 May 25;21(1):33. doi: 10.1186/s12938-022-01001-x.
4
Doing More With Less: A Multitask Deep Learning Approach in Plant Phenotyping.
Front Plant Sci. 2020 Feb 28;11:141. doi: 10.3389/fpls.2020.00141. eCollection 2020.
5
Transferring Annotator- and Instance-Dependent Transition Matrix for Learning From Crowds.
IEEE Trans Pattern Anal Mach Intell. 2024 Nov;46(11):7377-7391. doi: 10.1109/TPAMI.2024.3388209. Epub 2024 Oct 3.
6
Semi-supervised training of deep convolutional neural networks with heterogeneous data and few local annotations: An experiment on prostate histopathology image classification.
Med Image Anal. 2021 Oct;73:102165. doi: 10.1016/j.media.2021.102165. Epub 2021 Jul 14.
7
RIL-Contour: a Medical Imaging Dataset Annotation Tool for and with Deep Learning.
J Digit Imaging. 2019 Aug;32(4):571-581. doi: 10.1007/s10278-019-00232-0.
8
Modeling multiple time series annotations as noisy distortions of the ground truth: An Expectation-Maximization approach.
IEEE Trans Affect Comput. 2018 Jan-Mar;9(1):76-89. doi: 10.1109/TAFFC.2016.2592918. Epub 2016 Jul 19.
9
Multi-task learning to leverage partially annotated data for PPI interface prediction.
Sci Rep. 2022 Jun 21;12(1):10487. doi: 10.1038/s41598-022-13951-2.
