AttentionViz: A Global View of Transformer Attention

Authors

Catherine Yeh, Yida Chen, Aoyu Wu, Cynthia Chen, Fernanda Viégas, Martin Wattenberg

Publication

IEEE Trans Vis Comput Graph. 2024 Jan;30(1):262-272. doi: 10.1109/TVCG.2023.3327163. Epub 2023 Dec 25.

Abstract

Transformer models are revolutionizing machine learning, but their inner workings remain mysterious. In this work, we present a new visualization technique designed to help researchers understand the self-attention mechanism in transformers that allows these models to learn rich, contextual relationships between elements of a sequence. The main idea behind our method is to visualize a joint embedding of the query and key vectors used by transformer models to compute attention. Unlike previous attention visualization techniques, our approach enables the analysis of global patterns across multiple input sequences. We create an interactive visualization tool, AttentionViz (demo: http://attentionviz.com), based on these joint query-key embeddings, and use it to study attention mechanisms in both language and vision transformers. We demonstrate the utility of our approach in improving model understanding and offering new insights about query-key interactions through several application scenarios and expert feedback.
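
The joint query-key embedding described in the abstract is straightforward to prototype. The sketch below is not the authors' implementation: it extracts the query and key vectors for a single attention head of a Hugging Face bert-base-uncased model (the layer and head indices are arbitrary illustrative choices) and projects both point sets together with PCA to keep dependencies light, whereas the paper uses t-SNE/UMAP with additional query/key normalization to make distances between the two sets more comparable.

```python
# A minimal sketch of the joint query-key embedding idea, not the
# authors' implementation. Assumes Hugging Face `transformers` and
# scikit-learn; the layer/head choice below is arbitrary.
import torch
from sklearn.decomposition import PCA
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

LAYER, HEAD = 3, 0  # illustrative choices

inputs = tokenizer("the quick brown fox jumps over the lazy dog",
                   return_tensors="pt")

with torch.no_grad():
    # hidden_states[LAYER] is the input to encoder layer LAYER.
    hidden = model(**inputs, output_hidden_states=True).hidden_states[LAYER]
    attn = model.encoder.layer[LAYER].attention.self
    q = attn.query(hidden)  # shape: (1, seq_len, hidden_size)
    k = attn.key(hidden)

# Slice out one head's subspace of the projected vectors.
d = attn.attention_head_size
q_h = q[0, :, HEAD * d:(HEAD + 1) * d]
k_h = k[0, :, HEAD * d:(HEAD + 1) * d]

# Project queries and keys *jointly*, so the two point sets land in
# the same 2-D space (the paper's central idea).
joint = torch.cat([q_h, k_h], dim=0).numpy()
coords = PCA(n_components=2).fit_transform(joint)

n = q_h.shape[0]
query_xy, key_xy = coords[:n], coords[n:]  # plot together in one scatter
```

Plotting query_xy and key_xy in a single scatter plot gives a one-head, one-sentence version of the view; AttentionViz applies the same joint embedding per head across many input sequences to reveal global patterns.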
