


Multi-view affinity-based projection alignment for unsupervised domain adaptation via locality preserving optimization.

Authors

Luo Weibin, Chen Mingye, Gao Jian, Zhu Yanping, Wang Fang, Zhu Chenyang

Affiliations

School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, 213000, China.

Department of Computer Science, Brunel University London, London, UB8 3PH, UK.

Publication

Sci Rep. 2025 Jul 1;15(1):20452. doi: 10.1038/s41598-025-05331-3.

DOI: 10.1038/s41598-025-05331-3
PMID: 40595895
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12216763/
Abstract

Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain with differing data distributions. However, it remains difficult due to noisy pseudo-labels in the target domain, inadequate modeling of local geometric structure, and reliance on a single input view that limits representational diversity in challenging tasks. We propose a framework named Multi-view Affinity-based Projection Alignment (MAPA) that uses a teacher-student network and multi-view augmentation to stabilize pseudo-labels and enhance feature diversity. MAPA transforms each sample into multiple augmented views, constructs a unified affinity matrix that combines semantic cues from pseudo-labels with feature-based distances, and then learns a locality-preserving projection to align source and target data in a shared low-dimensional space. An iterative strategy refines pseudo-labels by discarding low-confidence samples, thereby raising label quality and strengthening supervision for the target domain. MAPA also employs a consistency-weighted fusion mechanism to merge predictions from multiple views, improving stability under domain shift. Finally, MAPA leverages class-centric and cluster-level relationships in the projected space to further refine label assignments, enhancing the overall adaptation process. Experimental results on Office-Home, ImageCLEF, and VisDA-2017 show that MAPA surpasses recent state-of-the-art methods, and it maintains robust performance across backbones including ResNet-50, ResNet-101, and Vision Transformer (ViT).
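The abstract does not give MAPA's exact formulation, but the two central steps it names — a unified affinity matrix mixing pseudo-label agreement with feature distances, and a locality-preserving projection into a shared low-dimensional space — follow a standard pattern. Below is a minimal, generic sketch of that pattern (not the authors' implementation): the mixing weight `alpha`, the kernel width `sigma`, and the toy data are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh


def affinity_matrix(X, labels, sigma=1.0, alpha=0.5):
    """Unified affinity: blend pseudo-label agreement with a
    Gaussian feature-distance kernel. alpha is a hypothetical
    balance parameter, not taken from the paper."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    feat = np.exp(-d2 / (2.0 * sigma ** 2))                    # feature-based cue
    sem = (labels[:, None] == labels[None, :]).astype(float)   # semantic cue
    return alpha * sem + (1.0 - alpha) * feat


def locality_preserving_projection(X, W, dim=2):
    """Classic LPP: solve the generalized eigenproblem
    X^T L X a = lam X^T D X a and keep the eigenvectors with the
    smallest eigenvalues, so nearby (high-affinity) samples stay
    close after projection."""
    D = np.diag(W.sum(axis=1))
    L = D - W                                   # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # ridge for numerical stability
    _, vecs = eigh(A, B)                        # generalized symmetric solver
    return vecs[:, :dim]                        # columns = projection directions


# Toy run: 40 samples, 8-D features, 3 pseudo-classes (all synthetic).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))
labels = rng.integers(0, 3, size=40)            # stand-in pseudo-labels
W = affinity_matrix(X, labels)
P = locality_preserving_projection(X, W, dim=2)
Z = X @ P                                       # shared low-dimensional embedding
print(Z.shape)
```

In the full method this projection would be fit jointly on source and target features, with the iterative pseudo-label refinement and consistency-weighted multi-view fusion described above wrapped around it.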


Figures (PMC)
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e10/12216763/b0d5b4e1b968/41598_2025_5331_Figa_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e10/12216763/f919689f16eb/41598_2025_5331_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e10/12216763/11f23f4fcc42/41598_2025_5331_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e10/12216763/5def09820bff/41598_2025_5331_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e10/12216763/db7d33e7e59b/41598_2025_5331_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e10/12216763/f789b99d1f48/41598_2025_5331_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e10/12216763/6c69f220d39b/41598_2025_5331_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e10/12216763/f539a85e6e95/41598_2025_5331_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e10/12216763/e10a47fd801c/41598_2025_5331_Fig8_HTML.jpg

Similar Articles

1
Multi-view affinity-based projection alignment for unsupervised domain adaptation via locality preserving optimization.
Sci Rep. 2025 Jul 1;15(1):20452. doi: 10.1038/s41598-025-05331-3.
2
Unified Domain Adaptive Semantic Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2025 Aug;47(8):6731-6748. doi: 10.1109/TPAMI.2025.3562999.
3
Unsupervised cross-modality domain adaptation via source-domain labels guided contrastive learning for medical image segmentation.
Med Biol Eng Comput. 2025 Feb 13. doi: 10.1007/s11517-025-03312-2.
4
Unsupervised domain adaptation multi-level adversarial learning-based crossing-domain retinal vessel segmentation.
Comput Biol Med. 2024 Aug;178:108759. doi: 10.1016/j.compbiomed.2024.108759. Epub 2024 Jun 24.
5
Modeling the Label Distributions for Weakly-Supervised Semantic Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2025 Aug;47(8):6290-6306. doi: 10.1109/TPAMI.2025.3557047.
6
Enhancing microbe-disease association prediction via multi-view graph convolution and latent feature learning.
Comput Biol Chem. 2025 Jun 30;119:108581. doi: 10.1016/j.compbiolchem.2025.108581.
7
Stabilizing machine learning for reproducible and explainable results: A novel validation approach to subject-specific insights.
Comput Methods Programs Biomed. 2025 Jun 21;269:108899. doi: 10.1016/j.cmpb.2025.108899.
8
Leveraging a foundation model zoo for cell similarity search in oncological microscopy across devices.
Front Oncol. 2025 Jun 18;15:1480384. doi: 10.3389/fonc.2025.1480384. eCollection 2025.
9
A Weight-Aware-Based Multisource Unsupervised Domain Adaptation Method for Human Motion Intention Recognition.
IEEE Trans Cybern. 2025 Jul;55(7):3131-3143. doi: 10.1109/TCYB.2025.3565754.
10
Factors that influence parents' and informal caregivers' views and practices regarding routine childhood vaccination: a qualitative evidence synthesis.
Cochrane Database Syst Rev. 2021 Oct 27;10(10):CD013265. doi: 10.1002/14651858.CD013265.pub2.

References Cited in This Article

1
Tensorial multiview low-rank high-order graph learning for context-enhanced domain adaptation.
Neural Netw. 2025 Jan;181:106859. doi: 10.1016/j.neunet.2024.106859. Epub 2024 Nov 2.
2
Dual domain distribution disruption with semantics preservation: Unsupervised domain adaptation for medical image segmentation.
Med Image Anal. 2024 Oct;97:103275. doi: 10.1016/j.media.2024.103275. Epub 2024 Jul 14.
3
Prediction and visualization of moisture content in Tencha drying processes by computer vision and deep learning.
J Sci Food Agric. 2024 Jul;104(9):5486-5494. doi: 10.1002/jsfa.13381. Epub 2024 Feb 27.
4
Label-free detection of trace level zearalenone in corn oil by surface-enhanced Raman spectroscopy (SERS) coupled with deep learning models.
Food Chem. 2023 Jul 15;414:135705. doi: 10.1016/j.foodchem.2023.135705. Epub 2023 Feb 15.