Suppr 超能文献



Similar Articles

1. Core-Periphery Principle Guided Redesign of Self-Attention in Transformers.
   ArXiv. 2023 Mar 27:arXiv:2303.15569v1.
2. A Unified and Biologically Plausible Relational Graph Representation of Vision Transformers.
   IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):3231-3243. doi: 10.1109/TNNLS.2023.3342810. Epub 2025 Feb 6.
3. RT-ViT: Real-Time Monocular Depth Estimation Using Lightweight Vision Transformers.
   Sensors (Basel). 2022 May 19;22(10):3849. doi: 10.3390/s22103849.
4. Eye-Gaze-Guided Vision Transformer for Rectifying Shortcut Learning.
   IEEE Trans Med Imaging. 2023 Nov;42(11):3384-3394. doi: 10.1109/TMI.2023.3287572. Epub 2023 Oct 27.
5. Rethinking Attention Mechanisms in Vision Transformers with Graph Structures.
   Sensors (Basel). 2024 Feb 8;24(4):1111. doi: 10.3390/s24041111.
6. Gait-ViT: Gait Recognition with Vision Transformer.
   Sensors (Basel). 2022 Sep 28;22(19):7362. doi: 10.3390/s22197362.
7. Vision Transformers in Image Restoration: A Survey.
   Sensors (Basel). 2023 Feb 21;23(5):2385. doi: 10.3390/s23052385.
8. ViT-MVT: A Unified Vision Transformer Network for Multiple Vision Tasks.
   IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):3027-3041. doi: 10.1109/TNNLS.2023.3342141. Epub 2025 Feb 6.
9. Rectify ViT Shortcut Learning by Visual Saliency.
   IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):18013-18025. doi: 10.1109/TNNLS.2023.3310531. Epub 2024 Dec 2.
10. BUViTNet: Breast Ultrasound Detection via Vision Transformers.
    Diagnostics (Basel). 2022 Nov 1;12(11):2654. doi: 10.3390/diagnostics12112654.

Cited By

1. Empowering Graph Neural Network-Based Computational Drug Repositioning with Large Language Model-Inferred Knowledge Representation.
   Interdiscip Sci. 2024 Sep 26. doi: 10.1007/s12539-024-00654-7.

Core-Periphery Principle Guided Redesign of Self-Attention in Transformers.

Author Information

Yu Xiaowei, Zhang Lu, Dai Haixing, Lyu Yanjun, Zhao Lin, Wu Zihao, Liu David, Liu Tianming, Zhu Dajiang

Publication Information

ArXiv. 2023 Mar 27:arXiv:2303.15569v1.

PMID: 37033455
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10081348/
Abstract

Designing more efficient, reliable, and explainable neural network architectures is critical to studies that are based on artificial intelligence (AI) techniques. Previous studies, by post-hoc analysis, have found that the best-performing artificial neural networks (ANNs) surprisingly resemble biological neural networks (BNNs), which indicates that ANNs and BNNs may share some common principles to achieve optimal performance in either machine learning or cognitive/behavior tasks. Inspired by this phenomenon, we proactively instill organizational principles of BNNs to guide the redesign of ANNs. We leverage the Core-Periphery (CP) organization, which is widely found in human brain networks, to guide the information communication mechanism in the self-attention of the vision transformer (ViT), and name this novel framework CP-ViT. In CP-ViT, the attention operation between nodes is defined by a sparse graph with a Core-Periphery structure (CP graph), where the core nodes are redesigned and reorganized to play an integrative role and serve as a center for other periphery nodes to exchange information. We evaluated the proposed CP-ViT on multiple public datasets, including medical image datasets (INbreast) and natural image datasets. Interestingly, by incorporating the BNN-derived principle (CP structure) into the redesign of ViT, our CP-ViT outperforms other state-of-the-art ANNs. In general, our work advances the state of the art in three aspects: 1) This work provides novel insights for brain-inspired AI: we can utilize the principles found in BNNs to guide and improve our ANN architecture design; 2) We show that there exist sweet spots of CP graphs that lead to CP-ViTs with significantly improved performance; and 3) The core nodes in CP-ViT correspond to task-related meaningful and important image patches, which can significantly enhance the interpretability of the trained deep model.
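The sparse attention described in the abstract — core nodes acting as hubs while periphery nodes exchange information only through them — can be sketched as a binary mask over standard scaled dot-product attention. The function names, the NumPy implementation, and the specific edge rule (core-core and core-periphery pairs allowed, periphery-periphery pairs blocked except self-attention) are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def core_periphery_mask(n_tokens, n_core):
    # Hypothetical CP adjacency: the first n_core tokens are "core".
    # Core tokens attend everywhere and are attended to by everyone;
    # periphery-periphery edges are dropped, keeping only self-attention.
    mask = np.zeros((n_tokens, n_tokens), dtype=bool)
    mask[:n_core, :] = True       # core rows: attend to all tokens
    mask[:, :n_core] = True       # core columns: all tokens attend to core
    np.fill_diagonal(mask, True)  # every token keeps self-attention
    return mask

def masked_attention(q, k, v, mask):
    # Scaled dot-product attention; disallowed pairs are set to -inf
    # before the softmax, so their weights become exactly zero.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Under this sketch, information between two periphery tokens can only flow indirectly, via a core token, across successive layers — which mirrors the integrative role the abstract assigns to core nodes.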
