Quantum Perturbation Theory Using Tensor Cores and a Deep Neural Network

Cited by: 10
Authors
Finkelstein, Joshua [2 ]
Rubensson, Emanuel H. [3 ]
Mniszewski, Susan M. [1 ]
Negre, Christian F. A. [2 ]
Niklasson, Anders M. N. [2 ]
Affiliations
[1] Los Alamos Natl Lab, Comp Computat & Stat Sci Div, Los Alamos, NM 87545 USA
[2] Los Alamos Natl Lab, Div Theoret, Los Alamos, NM 87545 USA
[3] Uppsala Univ, Div Comp Sci, Dept Informat Technol, SE-75105 Uppsala, Sweden
Keywords
GRAPHICAL PROCESSING UNITS; ELECTRONIC-STRUCTURE CALCULATIONS; TIGHT-BINDING METHOD; DENSITY-MATRIX; HARTREE-FOCK; CHEMISTRY; FACTORIZATION; CONSTRUCTION; POLARIZATION; SIMULATIONS;
DOI
10.1021/acs.jctc.2c00274
Chinese Library Classification
O64 [Physical chemistry (theoretical chemistry); chemical physics]
Discipline codes
070304; 081704
Abstract
Time-independent quantum response calculations are performed using Tensor cores. This is achieved by mapping density matrix perturbation theory onto the computational structure of a deep neural network. The main computational cost of each deep layer is dominated by tensor contractions, i.e., dense matrix-matrix multiplications, in mixed-precision arithmetic, which achieve close to peak performance. Quantum response calculations are demonstrated and analyzed using self-consistent charge density-functional tight-binding theory as well as coupled-perturbed Hartree-Fock theory. For linear response calculations, a novel parameter-free convergence criterion is presented that is well suited for numerically noisy low-precision floating-point operations, and we demonstrate a peak performance of almost 200 Tflops using the Tensor cores of two Nvidia A100 GPUs.
Pages: 4255-4268
Number of pages: 14
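
As a rough illustration of the layered structure described in the abstract, the sketch below implements a simplified density-matrix perturbation (SP2-style) recursion in NumPy, where each "layer" is dominated by dense matrix-matrix multiplications. This is not the authors' implementation: the function name sp2_response, the supplied spectral bounds eps_lo/eps_hi, and the fixed layer count (in place of the paper's parameter-free convergence criterion) are assumptions, and casting to float16 only mimics the fp16-input/fp32-accumulate arithmetic of Tensor cores.

import numpy as np

def sp2_response(H0, H1, nocc, eps_lo, eps_hi, layers=30):
    """Layered SP2-style recursion for the density matrix D0 and its
    first-order response D1 to a perturbation H1 (orthogonal basis).
    Hypothetical sketch: a fixed layer count replaces the paper's
    parameter-free convergence criterion, and fp16 casts only mimic
    Tensor-core mixed precision."""
    n = H0.shape[0]
    # "Input layer": linear map of the spectrum of H0 into [0, 1].
    X0 = (eps_hi * np.eye(n) - H0) / (eps_hi - eps_lo)
    X1 = -H1 / (eps_hi - eps_lo)          # response of the input layer
    # fp16 inputs with an fp32 result stand in for Tensor-core GEMMs.
    mm = lambda A, B: (A.astype(np.float16) @ B.astype(np.float16)).astype(np.float32)
    for _ in range(layers):
        X0sq = mm(X0, X0)
        X1s = mm(X0, X1) + mm(X1, X0)     # chain rule applied to X0 @ X0
        if np.trace(X0) > nocc:           # squaring lowers the occupation
            X0, X1 = X0sq, X1s
        else:                             # 2X - X^2 raises the occupation
            X0, X1 = 2.0 * X0 - X0sq, 2.0 * X1 - X1s
    return X0, X1

For a toy symmetric H0 with known spectral bounds, X0 converges toward the idempotent density matrix and X1 toward its first-order response for H(lambda) = H0 + lambda*H1; the paper instead stops the recursion with a parameter-free convergence criterion designed to tolerate the noise of low-precision floating-point operations.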