Computing high-degree polynomial gradients in memory

Cited by: 2
Authors
Bhattacharya, Tinish [1 ]
Hutchinson, George H. [1 ]
Pedretti, Giacomo [2 ]
Sheng, Xia [2 ]
Ignowski, Jim [3 ]
Van Vaerenbergh, Thomas [4 ]
Beausoleil, Ray [4 ]
Strachan, John Paul [5 ,6 ]
Strukov, Dmitri B. [1 ]
Affiliations
[1] Univ Calif Santa Barbara, Dept Elect & Comp Engn, Santa Barbara, CA 93106 USA
[2] Hewlett Packard Labs, Artificial Intelligence Res Lab, Milpitas, CA USA
[3] Hewlett Packard Labs, Artificial Intelligence Res Lab, Ft Collins, CO USA
[4] Hewlett Packard Labs, Large Scale Integrated Photon Lab, Milpitas, CA USA
[5] Forschungszentrum Julich, Inst Neuromorph Comp Nodes PGI 14, Peter Grunberg Inst, Julich, Germany
[6] Rhein Westfal TH Aachen, Fac Elect Engn, Aachen, Germany
Keywords
NEURAL NETWORKS; OPTIMIZATION; SIGNAL
DOI
10.1038/s41467-024-52488-y
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Specialized function gradient computing hardware could greatly improve the performance of state-of-the-art optimization algorithms. Prior work on such hardware, performed in the context of Ising machines and related concepts, is limited to quadratic polynomials and does not scale to commonly used higher-order functions. Here, we propose an approach for massively parallel gradient calculations of high-degree polynomials, which is conducive to efficient mixed-signal in-memory computing circuit implementations and whose area scales proportionally with the product of the number of variables and the number of terms in the function and, most importantly, independently of its degree. Two flavors of this approach are proposed. The first is limited to binary-variable polynomials, typical of combinatorial optimization problems, while the second is broader at the cost of a more complex periphery. To validate the former approach, we experimentally demonstrated solving a small-scale third-order Boolean satisfiability problem on integrated metal-oxide memristor crossbar circuits with a competitive heuristic algorithm. Simulation results for larger-scale, more practical problems show orders-of-magnitude improvements in area, speed, and energy efficiency compared to the state of the art. We discuss how our work could enable even higher-performance systems after co-designing algorithms to exploit massively parallel gradient computation.

Editor's summary: Current specialized function gradient computing hardware is not scalable to common higher-order functions. This work reports an approach for massively parallel gradient calculations of high-degree polynomials. Solving a Boolean satisfiability problem was experimentally implemented on an in-memory computing circuit.
Pages: 11
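
Note on the scaling claim: the abstract states that gradient computation cost grows with the product of the number of variables and terms but not with the polynomial's degree. For the binary-variable case this can be illustrated with a small software sketch. The NumPy code below is a hypothetical illustration of the underlying multilinear-gradient arithmetic only, not the paper's mixed-signal crossbar circuit; the function name polynomial_gradient and the term-by-variable membership data layout are assumptions made for this example.

# Illustrative sketch only (not the paper's circuit): gradient of a multilinear
# pseudo-Boolean polynomial  f(x) = sum_t c_t * prod_{i in S_t} x_i,  x_i in {0, 1}.
# Work scales with (#terms x #variables), independent of the polynomial's degree.
import numpy as np

def polynomial_gradient(coeffs, membership, x):
    # coeffs: (T,) term coefficients c_t.
    # membership: (T, N) boolean matrix; True where variable i appears in term t.
    # x: (N,) binary assignment with entries in {0, 1}.
    # Returns the (N,) vector of partial derivatives of the multilinear form.
    coeffs = np.asarray(coeffs, dtype=float)
    M = np.asarray(membership, dtype=bool)
    x = np.asarray(x, dtype=int)

    # Count zero-valued member variables in each term.
    zeros_per_term = (M & (x == 0)).sum(axis=1)               # shape (T,)

    # For binary variables, prod_{j in S_t, j != i} x_j equals 1 exactly when the
    # term has no zero-valued members, or its single zero member is i itself.
    excl_prod = (zeros_per_term[:, None] == 0) | (
        (zeros_per_term[:, None] == 1) & M & (x[None, :] == 0)
    )                                                          # shape (T, N)

    # df/dx_i = sum over terms containing x_i of c_t * prod_{j in S_t, j != i} x_j.
    return (coeffs[:, None] * (M & excl_prod)).sum(axis=0)

# Example: f(x) = 2*x0*x1*x2 - 3*x1*x3 (a degree-3 polynomial).
c = [2.0, -3.0]
M = [[1, 1, 1, 0],
     [0, 1, 0, 1]]
print(polynomial_gradient(c, M, [1, 1, 0, 1]))   # -> [ 0. -3.  2. -3.]

In this sketch every partial derivative of the degree-3 example is obtained from a single pass over the term-by-variable membership matrix, mirroring the degree-independent scaling described in the abstract.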