Suppr 超能文献


Multi-Modal Joint Clustering With Application for Unsupervised Attribute Discovery.

Publication Information

IEEE Trans Image Process. 2018 Sep;27(9):4345-4356. doi: 10.1109/TIP.2018.2831454.

DOI: 10.1109/TIP.2018.2831454
PMID: 29870352
Abstract

Utilizing multiple descriptions/views of an object is often useful in image clustering tasks. Although many methods have been proposed to cluster multi-view data effectively, unaddressed problems remain, such as the errors that traditional spectral clustering methods introduce through their two disjoint stages: 1) eigendecomposition and 2) discretization of the new representations. In this paper, we propose a unified clustering framework that jointly learns the two stages while utilizing multiple descriptions of the data. More specifically, two learning methods are derived from this framework: 1) through graph construction from different views and 2) through combining multiple graphs. Furthermore, benefiting from the separability and local graph-preserving properties of the proposed methods, a novel unsupervised automatic attribute discovery method is proposed. We validate the efficacy of our methods on five data sets, showing that the proposed joint-learning clustering methods outperform recent state-of-the-art methods. We also show that a novel method can be derived to address unsupervised automatic attribute discovery tasks.
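The "two disjoint stages" the abstract critiques can be made concrete with a minimal sketch of the classical multi-view spectral clustering baseline: per-view affinity graphs are fused (here by simple averaging, an assumption for illustration — not the paper's joint scheme), the normalized Laplacian is eigendecomposed (stage 1), and the continuous embedding is discretized with k-means (stage 2). All function names below are hypothetical; this is a generic baseline, not the authors' method.

```python
import numpy as np

def rbf_affinity(X, gamma=1.0):
    # Pairwise RBF (Gaussian) similarity graph for one view/description.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def two_stage_spectral(Ws, k):
    # Fuse the per-view graphs by averaging (illustrative assumption).
    W = np.mean(Ws, axis=0)
    d = W.sum(axis=1)
    # Normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    Dinv = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(W)) - (Dinv[:, None] * W) * Dinv[None, :]
    # Stage 1: eigendecomposition -- continuous spectral relaxation.
    _, vecs = np.linalg.eigh(L)          # eigenvalues ascending
    U = vecs[:, :k]                      # k smallest eigenvectors
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    # Stage 2: discretization via Lloyd's k-means on the embedding,
    # with deterministic farthest-point initialization.
    C = np.empty((k, U.shape[1]))
    C[0] = U[0]
    for j in range(1, k):
        d2c = np.min(((U[:, None] - C[None, :j]) ** 2).sum(-1), axis=1)
        C[j] = U[np.argmax(d2c)]
    for _ in range(50):
        labels = np.argmin(((U[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = U[labels == j].mean(axis=0)
    return labels
```

Because the eigendecomposition and the k-means discretization are optimized separately, errors made in the relaxed embedding cannot be corrected at the discretization step — the gap the paper's joint learning framework is designed to close.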


Similar Articles

1. Multi-Modal Joint Clustering With Application for Unsupervised Attribute Discovery.
IEEE Trans Image Process. 2018 Sep;27(9):4345-4356. doi: 10.1109/TIP.2018.2831454.

2. Multi-View Diffusion Process for Spectral Clustering and Image Retrieval.
IEEE Trans Image Process. 2023;32:4610-4620. doi: 10.1109/TIP.2023.3302517. Epub 2023 Aug 16.

3. Pseudo-Label Guided Collective Matrix Factorization for Multiview Clustering.
IEEE Trans Cybern. 2022 Sep;52(9):8681-8691. doi: 10.1109/TCYB.2021.3051182. Epub 2022 Aug 18.

4. Constrained Multi-View Video Face Clustering.
IEEE Trans Image Process. 2015 Nov;24(11):4381-93. doi: 10.1109/TIP.2015.2463223. Epub 2015 Jul 30.

5. Balance guided incomplete multi-view spectral clustering.
Neural Netw. 2023 Sep;166:260-272. doi: 10.1016/j.neunet.2023.07.022. Epub 2023 Jul 20.

6. Consensus guided incomplete multi-view spectral clustering.
Neural Netw. 2021 Jan;133:207-219. doi: 10.1016/j.neunet.2020.10.014. Epub 2020 Nov 11.

7. Contextual Correlation Preserving Multiview Featured Graph Clustering.
IEEE Trans Cybern. 2020 Oct;50(10):4318-4331. doi: 10.1109/TCYB.2019.2926431. Epub 2019 Jul 19.

8. Adaptive Weighted Graph Fusion Incomplete Multi-View Subspace Clustering.
Sensors (Basel). 2020 Oct 10;20(20):5755. doi: 10.3390/s20205755.

9. Multi-view spectral clustering via common structure maximization of local and global representations.
Neural Netw. 2021 Nov;143:595-606. doi: 10.1016/j.neunet.2021.07.020. Epub 2021 Jul 21.

10. Unsupervised and Semisupervised Projection With Graph Optimization.
IEEE Trans Neural Netw Learn Syst. 2021 Apr;32(4):1547-1559. doi: 10.1109/TNNLS.2020.2984958. Epub 2021 Apr 2.