
Variational Distillation for Multi-View Learning.

Author Information

Tian Xudong, Zhang Zhizhong, Wang Cong, Zhang Wensheng, Qu Yanyun, Ma Lizhuang, Wu Zongze, Xie Yuan, Tao Dacheng

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2024 Jul;46(7):4551-4566. doi: 10.1109/TPAMI.2023.3343717. Epub 2024 Jun 5.

Abstract

Information Bottleneck (IB) provides an information-theoretic principle for multi-view learning by revealing the various components contained in each viewpoint. This highlights the necessity of capturing their distinct roles to achieve view-invariant and predictive representations, but it remains under-explored due to the technical intractability of modeling and organizing innumerable mutual information (MI) terms. Recent studies show that sufficiency and consistency play such key roles in multi-view representation learning, and could be preserved via a variational distillation framework. But when it generalizes to arbitrary viewpoints, such a strategy fails as the mutual information terms of consistency become complicated. This paper presents Multi-View Variational Distillation (MVD), tackling the above limitations for generalized multi-view learning. Uniquely, MVD can recognize useful consistent information and prioritize diverse components by their generalization ability. This guides an analytical and scalable solution to achieving both sufficiency and consistency. Additionally, by rigorously reformulating the IB objective, MVD tackles the difficulties in MI optimization and fully realizes the theoretical advantages of the information bottleneck principle. We extensively evaluate our model on diverse tasks to verify its effectiveness, where the considerable gains provide key insights into achieving generalized multi-view representations under a rigorous information-theoretic principle.
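
For context, the abstract builds on the standard Information Bottleneck objective; the sketch below is the generic single-view form (Tishby et al.), not the paper's reformulated multi-view objective, and the symbols X (input view), Y (label), Z (learned representation), and beta (trade-off coefficient) follow the conventional notation rather than the paper's.

\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)

Here I(·;·) denotes mutual information: the first term compresses the representation, while the second preserves predictive (sufficient) information about the label. In a multi-view setting one such trade-off arises per view, together with cross-view MI terms enforcing consistency, which is why the number of MI terms grows rapidly with the number of views and motivates the scalable reformulation described above.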

