

Learning Common Semantics via Optimal Transport for Contrastive Multi-View Clustering.

Authors

Zhang Qian, Zhang Lin, Song Ran, Cong Runmin, Liu Yonghuai, Zhang Wei

Publication

IEEE Trans Image Process. 2024;33:4501-4515. doi: 10.1109/TIP.2024.3436615. Epub 2024 Aug 19.

DOI: 10.1109/TIP.2024.3436615
PMID: 39115994
Abstract

Multi-view clustering aims to learn discriminative representations from multi-view data. Although existing methods show impressive performance by leveraging contrastive learning to tackle the representation gap between every two views, they share the common limitation of not performing semantic alignment from a global perspective, resulting in the undermining of semantic patterns in multi-view data. This paper presents CSOT, namely Common Semantics via Optimal Transport, to boost contrastive multi-view clustering via semantic learning in a common space that integrates all views. Through optimal transport, the samples in multiple views are mapped to the joint clusters which represent the multi-view semantic patterns in the common space. With the semantic assignment derived from the optimal transport plan, we design a semantic learning module where the soft assignment vector works as a global supervision to enforce the model to learn consistent semantics among all views. Moreover, we propose a semantic-aware re-weighting strategy to treat samples differently according to their semantic significance, which improves the effectiveness of cross-view contrastive representation learning. Extensive experimental results demonstrate that CSOT achieves the state-of-the-art clustering performance.
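The core mechanism in the abstract — transporting samples to joint clusters and using the resulting soft assignment vectors as global supervision — can be sketched as an entropic Sinkhorn-Knopp iteration with uniform cluster marginals. This is a minimal illustration of that kind of optimal-transport assignment, not the paper's actual implementation; the function name, the epsilon temperature, and the iteration count are illustrative assumptions.

```python
import numpy as np

def sinkhorn_assignment(scores, eps=0.05, n_iters=3):
    """Soft-assign n samples to k joint clusters via entropic optimal
    transport (Sinkhorn-Knopp), with uniform marginals so each cluster
    receives an equal share of total mass.

    scores: (n, k) similarities between sample embeddings and cluster
    prototypes. Returns an (n, k) matrix whose rows are soft assignment
    vectors (each row sums to 1).
    """
    n, k = scores.shape
    Q = np.exp(scores / eps)   # Gibbs kernel of the similarity scores
    Q /= Q.sum()               # normalize total transported mass to 1
    for _ in range(n_iters):
        Q /= Q.sum(axis=0, keepdims=True)  # balance columns (clusters)
        Q /= k
        Q /= Q.sum(axis=1, keepdims=True)  # balance rows (samples)
        Q /= n
    return Q * n               # rescale so every row sums to 1

# Toy usage: 6 samples, 3 clusters; rows of P act as the soft labels
# that would supervise consistent semantics across views.
rng = np.random.default_rng(0)
P = sinkhorn_assignment(rng.normal(size=(6, 3)))
assert np.allclose(P.sum(axis=1), 1.0)  # each sample fully assigned
```

Because the last loop step normalizes rows, every sample's assignment is a proper distribution over clusters, while the column steps keep clusters balanced — the property that makes the plan usable as a global, view-independent supervision signal.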


Similar Articles

1. Learning Common Semantics via Optimal Transport for Contrastive Multi-View Clustering.
   IEEE Trans Image Process. 2024;33:4501-4515. doi: 10.1109/TIP.2024.3436615. Epub 2024 Aug 19.
2. Progressive Neighbor-masked Contrastive Learning for Fusion-style Deep Multi-view Clustering.
   Neural Netw. 2024 Nov;179:106503. doi: 10.1016/j.neunet.2024.106503. Epub 2024 Jul 1.
3. Selective Contrastive Learning for Unpaired Multi-View Clustering.
   IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1749-1763. doi: 10.1109/TNNLS.2023.3329658. Epub 2025 Jan 7.
4. Composite attention mechanism network for deep contrastive multi-view clustering.
   Neural Netw. 2024 Aug;176:106361. doi: 10.1016/j.neunet.2024.106361. Epub 2024 May 3.
5. Deep dual incomplete multi-view multi-label classification via label semantic-guided contrastive learning.
   Neural Netw. 2024 Dec;180:106674. doi: 10.1016/j.neunet.2024.106674. Epub 2024 Aug 30.
6. Clustering Enhanced Multiplex Graph Contrastive Representation Learning.
   IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1341-1355. doi: 10.1109/TNNLS.2023.3334751. Epub 2025 Jan 7.
7. Margin Preserving Self-Paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation.
   IEEE J Biomed Health Inform. 2022 Feb;26(2):638-647. doi: 10.1109/JBHI.2022.3140853. Epub 2022 Feb 4.
8. Contrastive Multi-View Kernel Learning.
   IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):9552-9566. doi: 10.1109/TPAMI.2023.3253211. Epub 2023 Jun 30.
9. ECCT: Efficient Contrastive Clustering via Pseudo-Siamese Vision Transformer and Multi-view Augmentation.
   Neural Netw. 2024 Dec;180:106684. doi: 10.1016/j.neunet.2024.106684. Epub 2024 Sep 2.
10. Molecular property prediction by semantic-invariant contrastive learning.
   Bioinformatics. 2023 Aug 1;39(8). doi: 10.1093/bioinformatics/btad462.