Reinforcement Learning Optimization of the Charging of a Dicke Quantum Battery

Cited by: 2
Authors
Erdman, Paolo Andrea [1]
Andolina, Gian Marcello [2,3]
Giovannetti, Vittorio [4,5]
Noe, Frank [1,6,7,8]
Affiliations
[1] Free Univ Berlin, Dept Math & Comp Sci, Arnimallee 6, D-14195 Berlin, Germany
[2] Barcelona Inst Sci & Technol, ICFO Inst Ciencies Foton, Av Carl Friedrich Gauss 3, Castelldefels 08860, Barcelona, Spain
[3] PSL Res Univ, Coll France, JEIP, UAR 3573, CNRS, F-75321 Paris, France
[4] Scuola Normale Super Pisa, NEST, I-56126 Pisa, Italy
[5] CNR, Ist Nanosci, I-56126 Pisa, Italy
[6] Microsoft Res AI4Sci, Karl Liebknecht Str 32, D-10178 Berlin, Germany
[7] Free Univ Berlin, Dept Phys, Arnimallee 6, D-14195 Berlin, Germany
[8] Rice Univ, Dept Chem, Houston, TX 77005 USA
Funding
European Research Council
Keywords
DYNAMICS;
DOI
10.1103/PhysRevLett.133.243602
Chinese Library Classification
O4 [Physics]
Discipline code
0702
Abstract
Quantum batteries are energy-storing devices, governed by quantum mechanics, that promise high charging performance thanks to collective effects. Because of its experimental feasibility, the Dicke battery, which comprises N two-level systems coupled to a common photon mode, is one of the most promising designs for quantum batteries. However, the chaotic nature of the model severely limits the extractable energy (ergotropy). Here, we use reinforcement learning to optimize the charging process of a Dicke battery by modulating either the coupling strength or the system-cavity detuning. We find that the ergotropy and the quantum mechanical energy fluctuations (charging precision) can be greatly improved with respect to standard charging strategies by countering the detrimental effect of quantum chaos. Notably, the collective speedup of the charging time can be preserved even when nearly fully charging the battery.
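The figure of merit optimized in the abstract, the ergotropy, is the maximum work extractable from a state ρ under a Hamiltonian H via unitary operations: the energy of ρ minus the energy of its passive state, which pairs the largest populations of ρ with the lowest energy levels. As an illustrative numerical sketch (not code from the paper; the single-qubit example below is an assumption, the full Dicke model would require the N-spin plus photon Hamiltonian), this can be computed as:

```python
import numpy as np

def ergotropy(rho, H):
    """Maximum unitarily extractable work from state rho under Hamiltonian H:
    E = tr(rho H) - tr(sigma H), where the passive state sigma pairs the
    largest eigenvalues (populations) of rho with the lowest levels of H."""
    p = np.sort(np.linalg.eigvalsh(rho))[::-1]  # populations, descending
    e = np.sort(np.linalg.eigvalsh(H))          # energy levels, ascending
    passive_energy = np.sum(p * e)              # energy of the passive state
    return np.real(np.trace(rho @ H)) - passive_energy

# Example: a single-qubit "battery" with H = (w/2) sigma_z.
w = 1.0
H = 0.5 * w * np.diag([-1.0, 1.0])
rho_excited = np.diag([0.0, 1.0])   # fully excited: all energy is extractable
print(ergotropy(rho_excited, H))    # -> 1.0 (= w)
```

A thermal (passive) state such as diag(0.7, 0.3) gives zero ergotropy under the same Hamiltonian, which is why a charged Dicke battery can hold energy that is nonetheless not extractable.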
Pages: 7
Related papers
50 records in total
  • [21] Variational quantum reinforcement learning via evolutionary optimization
    Chen, Samuel Yen-Chi
    Huang, Chih-Min
    Hsing, Chia-Wei
    Goan, Hsi-Sheng
    Kao, Ying-Jer
    MACHINE LEARNING: SCIENCE AND TECHNOLOGY, 2022, 3 (01)
  • [22] Reinforcement learning for optimization of variational quantum circuit architectures
    Ostaszewski, Mateusz
    Trenkwalder, Lea M.
    Masarczyk, Wojciech
    Scerri, Eleanor
    Dunjko, Vedran
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [23] Compiler Optimization for Quantum Computing Using Reinforcement Learning
    Quetschlich, Nils
    Burgholzer, Lukas
    Wille, Robert
    2023 60TH ACM/IEEE DESIGN AUTOMATION CONFERENCE, DAC, 2023
  • [24] Deep reinforcement learning for quantum Szilard engine optimization
    Sordal, Vegard B.
    Bergli, Joakim
    PHYSICAL REVIEW A, 2019, 100 (04)
  • [25] Comparing different operating regimes of a Dicke quantum battery
    Gemme, Giulia
    Sassetti, Maura
    Ferraro, Dario
    INTERNATIONAL JOURNAL OF QUANTUM INFORMATION, 2024, 22 (06)
  • [26] Optimal charging of a superconducting quantum battery
    Hu, Chang-Kang
    Qiu, Jiawei
    Souza, Paulo J. P.
    Yuan, Jiahao
    Zhou, Yuxuan
    Zhang, Libo
    Chu, Ji
    Pan, Xianchuang
    Hu, Ling
    Li, Jian
    Xu, Yuan
    Zhong, Youpeng
    Liu, Song
    Yan, Fei
    Tan, Dian
    Bachelard, R.
    Villas-Boas, C. J.
    Santos, Alan C.
    Yu, Dapeng
    QUANTUM SCIENCE AND TECHNOLOGY, 2022, 7 (04)
  • [27] Powerful harmonic charging in a quantum battery
    Zhang, Yu-Yu
    Yang, Tian-Ran
    Fu, Libin
    Wang, Xiaoguang
    PHYSICAL REVIEW E, 2019, 99 (05)
  • [28] Adaptive Duty Cycle Control for Optimal Battery Energy Storage System Charging by Reinforcement Learning
    Wiencek, Richard
    Ghosh, Sagnika
    2023 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI, 2023, : 8 - 10
  • [29] Efficient Routing and Charging Strategy for Electric Vehicles Considering Battery Life: A Reinforcement Learning Approach
    Ebrahimi, Dariush
    Kashefi, Seyedmohammad
    de Oliveira, Thiago E. Alves
    Alzhouri, Fadi
    2024 IEEE CANADIAN CONFERENCE ON ELECTRICAL AND COMPUTER ENGINEERING, CCECE 2024, 2024, : 429 - 435
  • [30] Control of battery charging based on reinforcement learning and long short-term memory networks
    Chang, Fangyuan
    Chen, Tao
    Su, Wencong
    Alsafasfeh, Qais
    COMPUTERS & ELECTRICAL ENGINEERING, 2020, 85