Nakhle Farid, Harfouche Antoine H, Karam Hani, Tserolas Vasileios
Department of Computer Science, Temple University, Japan Campus, Tokyo, Japan.
Unité de Formation et de Recherche en Sciences Économiques, Gestion, Mathématiques, et Informatique, Université Paris Nanterre, Nanterre, France.
Front Comput Neurosci. 2025 Jul 31;19:1638782. doi: 10.3389/fncom.2025.1638782. eCollection 2025.
The energy demands of modern AI systems have reached unprecedented levels, driven by the rapid scaling of deep learning models, including large language models, and the inefficiencies of current computational architectures. In contrast, biological neural systems operate with remarkable energy efficiency, achieving complex computations while consuming orders of magnitude less power. A key mechanism enabling this efficiency is subthreshold processing, where neurons perform computations through graded, continuous signals below the spiking threshold, reducing energy costs. Despite its significance in biological systems, subthreshold processing remains largely overlooked in AI design. This perspective explores how principles of subthreshold dynamics can inspire the design of novel AI architectures and computational methods as a step toward advancing TinyAI. We propose pathways that emulate the energy-efficient operations of biological neurons: algorithmic analogs of subthreshold integration (such as graded activation functions), dendritic-inspired hierarchical processing, and hybrid analog-digital systems. We further explore neuromorphic and compute-in-memory hardware platforms that could support these operations, and propose a design stack aligned with the efficiency and adaptability of the brain. By integrating subthreshold dynamics into AI architecture, this work provides a roadmap toward sustainable, responsive, and accessible intelligence for resource-constrained environments.
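The algorithmic analog sketched below is not taken from the paper itself; it is a minimal illustration of the idea of graded subthreshold integration, assuming a standard leaky-integrator membrane model and a sigmoid read-out (parameter names `tau`, `v_theta`, and `beta` are hypothetical choices, not terms from the abstract). A neuron's potential integrates weak inputs over time, and instead of emitting a binary spike only when a threshold is crossed, a smooth function of the subthreshold potential carries graded information downstream.

```python
import numpy as np

def leaky_integrate(inputs, tau=10.0, dt=1.0, v_rest=0.0):
    """Forward-Euler leaky integration of an input sequence into a
    graded membrane potential: dv/dt = (-(v - v_rest) + i) / tau.

    tau, dt, v_rest are illustrative parameters (membrane time
    constant, step size, resting potential).
    """
    v = v_rest
    trace = []
    for i in inputs:
        v += dt * (-(v - v_rest) + i) / tau
        trace.append(v)
    return np.array(trace)

def graded_activation(v, v_theta=1.0, beta=5.0):
    """Smooth, graded output around a nominal spiking threshold
    v_theta. Below threshold the output is small but nonzero, so
    subthreshold potentials still convey analog information instead
    of being discarded as "no spike"."""
    return 1.0 / (1.0 + np.exp(-beta * (v - v_theta)))

# A weak constant input drives the potential toward 0.8, short of the
# nominal threshold of 1.0; the graded read-out is small but nonzero.
trace = leaky_integrate(np.full(50, 0.8))
out = graded_activation(trace)
```

In a conventional spiking model this input would produce no output at all; here the graded read-out preserves the subthreshold signal, which is the computational property the abstract argues is energy-relevant.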