
Computing high-degree polynomial gradients in memory.

Authors

Bhattacharya Tinish, Hutchinson George H, Pedretti Giacomo, Sheng Xia, Ignowski Jim, Van Vaerenbergh Thomas, Beausoleil Ray, Strachan John Paul, Strukov Dmitri B

Affiliations

Department of Electrical and Computer Engineering, University of California at Santa Barbara, Santa Barbara, CA, USA.

Artificial Intelligence Research Lab, Hewlett Packard Labs, Milpitas, CA, USA.

Publication

Nat Commun. 2024 Sep 18;15(1):8211. doi: 10.1038/s41467-024-52488-y.

Abstract

Specialized function gradient computing hardware could greatly improve the performance of state-of-the-art optimization algorithms. Prior work on such hardware, performed in the context of Ising Machines and related concepts, is limited to quadratic polynomials and not scalable to commonly used higher-order functions. Here, we propose an approach for massively parallel gradient calculations of high-degree polynomials, which is conducive to efficient mixed-signal in-memory computing circuit implementations and whose area scales proportionally with the product of the number of variables and terms in the function and, most importantly, is independent of its degree. Two flavors of such an approach are proposed. The first is limited to binary-variable polynomials typical in combinatorial optimization problems, while the second type is broader at the cost of a more complex periphery. To validate the former approach, we experimentally demonstrated solving a small-scale third-order Boolean satisfiability problem with a competitive heuristic algorithm, based on integrated metal-oxide memristor crossbar circuits. Simulation results for larger-scale, more practical problems show orders of magnitude improvements in area, speed and energy efficiency compared to the state of the art. We discuss how our work could enable even higher-performance systems after co-designing algorithms to exploit massively parallel gradient computation.
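The gradient hardware itself is a mixed-signal in-memory circuit, but the scaling claim can be illustrated in software. The sketch below is a hypothetical NumPy analogy, not the authors' implementation or code: a binary-variable (multilinear) polynomial is stored as a coefficient vector plus a term-variable membership matrix, matching the variables-times-terms footprint described in the abstract, and all partial derivatives are obtained from per-term leave-one-out products regardless of the polynomial's degree. The function name, matrix encoding, and example polynomial are illustrative assumptions.

```python
import numpy as np

def poly_gradient(c, M, x):
    """Gradient of a multilinear (pseudo-Boolean) polynomial at a binary point.

    Hypothetical software analogy of term-parallel gradient computation;
    not the paper's mixed-signal crossbar circuit.

    c : (T,)   coefficients, one per term
    M : (T, n) 0/1 membership matrix; M[k, i] = 1 if variable i appears in term k
    x : (n,)   binary assignment in {0, 1}

    Returns grad : (n,), where grad[i] is the sum over terms containing
    variable i of c[k] times the product of that term's other member variables.
    """
    # Number of member variables of each term that are currently 0.
    zeros_in_term = (M * (1 - x)).sum(axis=1)             # shape (T,)

    # Leave-one-out product for (term k, variable i): for binary x it equals 1
    # iff every member of term k other than i is 1, i.e. the term has no zero
    # members, or its single zero member is i itself.
    loo = (zeros_in_term[:, None] == 0).astype(float)
    loo += ((zeros_in_term[:, None] == 1) & (M * (1 - x) == 1)).astype(float)

    # Each term contributes only to the gradients of its own member variables;
    # storage and work scale with (#terms x #variables), not with the degree.
    contrib = c[:, None] * M * loo                        # shape (T, n)
    return contrib.sum(axis=0)


# Example: f(x) = 2*x0*x1*x2 - 3*x1*x3 (one cubic term, one quadratic term).
c = np.array([2.0, -3.0])
M = np.array([[1, 1, 1, 0],
              [0, 1, 0, 1]])
x = np.array([1, 1, 0, 1])
print(poly_gradient(c, M, x))   # [ 0. -3.  2. -3.]
```

For the Boolean satisfiability experiment mentioned above, one common way to obtain such a polynomial is to encode each three-literal clause as a cubic penalty term, e.g. (x1 OR NOT x2 OR x3) becomes (1 - x1)*x2*(1 - x3); whether this matches the paper's exact encoding is not stated in the abstract.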


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4368/11411077/f3c1ab8a098d/41467_2024_52488_Fig1_HTML.jpg
