
Cross-Domain Facial Expression Recognition via Contrastive Warm up and Complexity-Aware Self-Training.

Authors

Li Yingjian, Huang Jiaxing, Lu Shijian, Zhang Zheng, Lu Guangming

Publication

IEEE Trans Image Process. 2023;32:5438-5450. doi: 10.1109/TIP.2023.3318955. Epub 2023 Oct 5.

DOI: 10.1109/TIP.2023.3318955
PMID: 37773906
Abstract

Unsupervised cross-domain Facial Expression Recognition (FER) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Existing methods strive to reduce the discrepancy between the source and target domains, but cannot effectively explore the abundant semantic information of the target domain due to the absence of target labels. To this end, we propose a novel framework via Contrastive Warm up and Complexity-aware Self-Training (CWCST), which facilitates source knowledge transfer and target semantic learning jointly. Specifically, we formulate a contrastive warm up strategy via features, momentum features, and learnable category centers to concurrently learn discriminative representations and narrow the domain gap, which benefits domain adaptation by generating more accurate target pseudo labels. Moreover, to deal with the inevitable noise in pseudo labels, we develop complexity-aware self-training with a label selection module based on prediction entropy, which iteratively generates pseudo labels and adaptively chooses the reliable ones for training, ultimately yielding effective exploration of target semantics. Furthermore, by jointly using these two components, our framework can effectively utilize the source knowledge and target semantic information through source-target co-training. In addition, our framework can be easily incorporated into other baselines with consistent performance improvements. Extensive experimental results on seven databases show the superior performance of the proposed method against various baselines.
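The entropy-based label selection described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the fixed entropy threshold, and the toy inputs are all hypothetical, and the paper's actual module may select labels adaptively rather than with a constant cutoff.

```python
import numpy as np

def select_reliable_pseudo_labels(probs, entropy_threshold=0.5):
    """Keep pseudo labels whose prediction entropy is below a threshold.

    probs: (N, C) array of softmax outputs for N unlabeled target samples.
    Returns (indices, labels) for the samples deemed reliable.
    """
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)  # per-sample entropy
    reliable = entropy < entropy_threshold                  # low entropy = confident
    indices = np.nonzero(reliable)[0]
    labels = probs[reliable].argmax(axis=1)                 # pseudo label = argmax class
    return indices, labels

# Toy example: three target samples, three expression classes
probs = np.array([
    [0.90, 0.05, 0.05],  # confident -> low entropy, kept
    [0.34, 0.33, 0.33],  # near-uniform -> high entropy, dropped
    [0.05, 0.90, 0.05],  # confident -> kept
])
idx, lbl = select_reliable_pseudo_labels(probs)
# idx -> [0, 2]; lbl -> [0, 1]
```

Filtering by prediction entropy rather than by the maximum probability alone accounts for the full shape of the predicted distribution, which is why near-uniform (ambiguous) predictions are dropped even when one class is slightly ahead.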


Similar Articles

1
Cross-Domain Facial Expression Recognition via Contrastive Warm up and Complexity-Aware Self-Training.
IEEE Trans Image Process. 2023;32:5438-5450. doi: 10.1109/TIP.2023.3318955. Epub 2023 Oct 5.
2
Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation.
Med Image Anal. 2023 Jul;87:102792. doi: 10.1016/j.media.2023.102792. Epub 2023 Mar 11.
3
Margin Preserving Self-Paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation.
IEEE J Biomed Health Inform. 2022 Feb;26(2):638-647. doi: 10.1109/JBHI.2022.3140853. Epub 2022 Feb 4.
4
Contrastive learning of graphs under label noise.
Neural Netw. 2024 Apr;172:106113. doi: 10.1016/j.neunet.2024.106113. Epub 2024 Jan 6.
5
Domain-interactive Contrastive Learning and Prototype-guided Self-training for Cross-domain Polyp Segmentation.
IEEE Trans Med Imaging. 2024 Aug 14;PP. doi: 10.1109/TMI.2024.3443262.
6
Memory consistent unsupervised off-the-shelf model adaptation for source-relaxed medical image segmentation.
Med Image Anal. 2023 Jan;83:102641. doi: 10.1016/j.media.2022.102641. Epub 2022 Oct 1.
7
Adaptive Contrastive Learning with Label Consistency for Source Data Free Unsupervised Domain Adaptation.
Sensors (Basel). 2022 Jun 2;22(11):4238. doi: 10.3390/s22114238.
8
Source free domain adaptation for medical image segmentation with fourier style mining.
Med Image Anal. 2022 Jul;79:102457. doi: 10.1016/j.media.2022.102457. Epub 2022 Apr 12.
9
Class-Incremental Unsupervised Domain Adaptation via Pseudo-Label Distillation.
IEEE Trans Image Process. 2024;33:1188-1198. doi: 10.1109/TIP.2024.3357258. Epub 2024 Feb 9.
10
Robust Cross-Domain Pseudo-Labeling and Contrastive Learning for Unsupervised Domain Adaptation NIR-VIS Face Recognition.
IEEE Trans Image Process. 2023;32:5231-5244. doi: 10.1109/TIP.2023.3309110. Epub 2023 Sep 20.