

AMCFCN: attentive multi-view contrastive fusion clustering net.

Authors

Xiao Huarun, Hong Zhiyong, Xiong Liping, Zeng Zhiqiang

Affiliation

College of Electronic and Information Engineering, Wuyi University, Jiangmen, Guangdong, China.

Publication

PeerJ Comput Sci. 2024 Mar 5;10:e1906. doi: 10.7717/peerj-cs.1906. eCollection 2024.

DOI: 10.7717/peerj-cs.1906
PMID: 39669450
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11636696/
Abstract

Advances in deep learning have propelled the evolution of multi-view clustering techniques, which strive to obtain a view-common representation from multi-view datasets. However, the contemporary multi-view clustering community confronts two prominent challenges. One is that view-specific representations lack guarantees to reduce noise introduction, and another is that the fusion process compromises view-specific representations, resulting in the inability to capture efficient information from multi-view data. This may negatively affect the accuracy of the clustering results. In this article, we introduce a novel technique named the "contrastive attentive strategy" to address the above problems. Our approach effectively extracts robust view-specific representations from multi-view data with reduced noise while preserving view completeness. This results in the extraction of consistent representations from multi-view data while preserving the features of view-specific representations. We integrate view-specific encoders, a hybrid attentive module, a fusion module, and deep clustering into a unified framework called AMCFCN. Experimental results on four multi-view datasets demonstrate that our method, AMCFCN, outperforms seven competitive multi-view clustering methods. Our source code is available at https://github.com/xiaohuarun/AMCFCN.
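The pipeline the abstract describes (view-specific encoders, an attentive module that weights views during fusion, and contrastive alignment of view representations) can be sketched as follows. This is a hypothetical NumPy illustration of the general technique, not the authors' implementation (their code is at the GitHub link above); the norm-based view scoring, the InfoNCE loss, and all function names here are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_fusion(views):
    """Fuse view-specific representations with per-sample attention weights.

    views: list of (n_samples, d) arrays, one per view.
    A learned scoring network would normally produce the view scores;
    here the L2 norm of each representation stands in for it.
    """
    stacked = np.stack(views, axis=1)                    # (n, V, d)
    scores = np.linalg.norm(stacked, axis=-1)            # (n, V)
    weights = softmax(scores, axis=1)                    # attention over views
    fused = (weights[..., None] * stacked).sum(axis=1)   # (n, d)
    return fused, weights

def info_nce(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss aligning two views of the same samples."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                                # (n, n) similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                   # positives on the diagonal

# Toy run: two 4-dim "encoded views" of 8 samples.
rng = np.random.default_rng(0)
v1, v2 = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
fused, w = attentive_fusion([v1, v2])
loss = info_nce(v1, v2)
```

In a full model the fused representation would feed a deep clustering head, and the contrastive term would be minimized jointly with the clustering objective so that fusion preserves view-specific features while the views agree on a consistent representation.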


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b7e6/11636696/84b513331554/peerj-cs-10-1906-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b7e6/11636696/8e5e3c486e4f/peerj-cs-10-1906-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b7e6/11636696/ccbb056788cc/peerj-cs-10-1906-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b7e6/11636696/94f6cd07eef7/peerj-cs-10-1906-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b7e6/11636696/4ab8a2de8709/peerj-cs-10-1906-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b7e6/11636696/19198b0ce630/peerj-cs-10-1906-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b7e6/11636696/c05498f6246c/peerj-cs-10-1906-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b7e6/11636696/efb4548f4405/peerj-cs-10-1906-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b7e6/11636696/4a87576e8284/peerj-cs-10-1906-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b7e6/11636696/241b205a63bb/peerj-cs-10-1906-g010.jpg

Similar Articles

1. AMCFCN: attentive multi-view contrastive fusion clustering net.
PeerJ Comput Sci. 2024 Mar 5;10:e1906. doi: 10.7717/peerj-cs.1906. eCollection 2024.
2. MFC-ACL: Multi-view fusion clustering with attentive contrastive learning.
Neural Netw. 2025 Apr;184:107055. doi: 10.1016/j.neunet.2024.107055. Epub 2024 Dec 20.
3. Composite attention mechanism network for deep contrastive multi-view clustering.
Neural Netw. 2024 Aug;176:106361. doi: 10.1016/j.neunet.2024.106361. Epub 2024 May 3.
4. Dual Contrast-Driven Deep Multi-View Clustering.
IEEE Trans Image Process. 2024;33:4753-4764. doi: 10.1109/TIP.2024.3444269. Epub 2024 Aug 30.
5. ECCT: Efficient Contrastive Clustering via Pseudo-Siamese Vision Transformer and Multi-view Augmentation.
Neural Netw. 2024 Dec;180:106684. doi: 10.1016/j.neunet.2024.106684. Epub 2024 Sep 2.
6. Multi-level multi-view network based on structural contrastive learning for scRNA-seq data clustering.
Brief Bioinform. 2024 Sep 23;25(6). doi: 10.1093/bib/bbae562.
7. Contrastive and adversarial regularized multi-level representation learning for incomplete multi-view clustering.
Neural Netw. 2024 Apr;172:106102. doi: 10.1016/j.neunet.2024.106102. Epub 2024 Jan 8.
8. Graph Embedding Contrastive Multi-Modal Representation Learning for Clustering.
IEEE Trans Image Process. 2023;32:1170-1183. doi: 10.1109/TIP.2023.3240863. Epub 2023 Feb 13.
9. Clustering Enhanced Multiplex Graph Contrastive Representation Learning.
IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1341-1355. doi: 10.1109/TNNLS.2023.3334751. Epub 2025 Jan 7.
10. Partially multi-view clustering via re-alignment.
Neural Netw. 2025 Feb;182:106884. doi: 10.1016/j.neunet.2024.106884. Epub 2024 Nov 12.

References Cited in This Article

1. Dual Contrastive Prediction for Incomplete Multi-View Representation Learning.
IEEE Trans Pattern Anal Mach Intell. 2023 Apr;45(4):4447-4461. doi: 10.1109/TPAMI.2022.3197238. Epub 2023 Mar 7.
2. Autoencoder in Autoencoder Networks.
IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2263-2275. doi: 10.1109/TNNLS.2022.3189239. Epub 2024 Feb 5.
3. Deep Multiview Collaborative Clustering.
IEEE Trans Neural Netw Learn Syst. 2023 Jan;34(1):516-526. doi: 10.1109/TNNLS.2021.3097748. Epub 2023 Jan 5.
4. Deep divergence-based approach to clustering.
Neural Netw. 2019 May;113:91-101. doi: 10.1016/j.neunet.2019.01.015. Epub 2019 Feb 8.
5. Rank-Constrained Spectral Clustering With Flexible Embedding.
IEEE Trans Neural Netw Learn Syst. 2018 Dec;29(12):6073-6082. doi: 10.1109/TNNLS.2018.2817538. Epub 2018 Apr 19.