Reinforcement Learning Optimization of the Charging of a Dicke Quantum Battery

Times Cited: 2
Authors
Erdman, Paolo Andrea [1 ]
Andolina, Gian Marcello [2 ,3 ]
Giovannetti, Vittorio [4 ,5 ]
Noe, Frank [1 ,6 ,7 ,8 ]
Affiliations
[1] Free Univ Berlin, Dept Math & Comp Sci, Arnimallee 6, D-14195 Berlin, Germany
[2] Barcelona Inst Sci & Technol, ICFO Inst Ciencies Foton, Av Carl Friedrich Gauss 3, Castelldefels 08860, Barcelona, Spain
[3] PSL Res Univ, Coll France, JEIP, UAR 3573,CNRS, F-75321 Paris, France
[4] Scuola Normale Super Pisa, NEST, I-56126 Pisa, Italy
[5] CNR, Ist Nanosci, I-56126 Pisa, Italy
[6] Microsoft Res AI4Sci, Karl Liebknecht Str 32, D-10178 Berlin, Germany
[7] Free Univ Berlin, Dept Phys, Arnimallee 6, D-14195 Berlin, Germany
[8] Rice Univ, Dept Chem, Houston, TX 77005 USA
Funding
European Research Council;
Keywords
DYNAMICS;
DOI
10.1103/PhysRevLett.133.243602
CLC number
O4 [Physics];
Discipline code
0702;
Abstract
Quantum batteries are energy-storing devices, governed by quantum mechanics, that promise high charging performance thanks to collective effects. Because of its experimental feasibility, the Dicke battery, which comprises N two-level systems coupled to a common photon mode, is one of the most promising designs for quantum batteries. However, the chaotic nature of the model severely hinders the extractable energy (ergotropy). Here, we use reinforcement learning to optimize the charging process of a Dicke battery by modulating either the coupling strength or the system-cavity detuning. We find that the ergotropy and quantum mechanical energy fluctuations (charging precision) can be greatly improved with respect to standard charging strategies by countering the detrimental effect of quantum chaos. Notably, the collective speedup of the charging time can be preserved even when nearly fully charging the battery.
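The abstract describes optimizing a time-dependent coupling between N two-level systems and a cavity mode. As a toy illustration of that idea (not the paper's actual reinforcement-learning method), the sketch below simulates a minimal Dicke-type battery (N = 2 qubits in the collective spin-1 manifold, a photon mode truncated at a few excitations) and uses a crude random search over piecewise-constant coupling schedules in place of an RL agent. All parameter values, cutoffs, and the search strategy are illustrative assumptions; the energy maximized here is the stored energy, not the ergotropy.

```python
import numpy as np

# Toy sketch (assumptions, not the paper's method): charge a small
# Dicke-type battery by modulating the matter-light coupling lam(t)
# piecewise in time; a random search stands in for the RL agent.

rng = np.random.default_rng(0)

J = 1                      # collective spin for N = 2 qubits
n_max = 4                  # photon-number cutoff (illustrative)
w = w0 = 1.0               # resonant cavity and qubit frequencies

# spin-1 operators in the |m = -1, 0, +1> basis
m = np.arange(-J, J + 1)
Jz = np.diag(m).astype(complex)
Jp = np.diag(np.sqrt(J * (J + 1) - m[:-1] * (m[:-1] + 1)), -1).astype(complex)
Jx = (Jp + Jp.conj().T) / 2

# cavity operators, truncated at n_max photons
a = np.diag(np.sqrt(np.arange(1, n_max + 1)), 1).astype(complex)
n_op = a.conj().T @ a

Ib = np.eye(2 * J + 1)
Ic = np.eye(n_max + 1)

H0 = w * np.kron(Ib, n_op) + w0 * np.kron(Jz, Ic)   # bare energies
V = np.kron(Jx, a + a.conj().T)                      # Dicke coupling term

def evolve(psi, lam, dt):
    """One piecewise-constant step under H0 + lam * V (exact via eigh)."""
    H = H0 + lam * V
    E, U = np.linalg.eigh(H)
    return U @ (np.exp(-1j * E * dt) * (U.conj().T @ psi))

# initial state: battery empty (m = -J), cavity holding 2J photons
psi0 = np.zeros((2 * J + 1) * (n_max + 1), dtype=complex)
psi0[2 * J] = 1.0

# stored battery energy, measured from the battery ground state
Ebat = np.kron(w0 * (Jz + J * Ib), Ic)

def charge(lams, dt=0.5):
    psi = psi0
    for lam in lams:
        psi = evolve(psi, lam, dt)
    return float(np.real(psi.conj() @ (Ebat @ psi)))

steps = 8
const_schedule = np.full(steps, 0.3)
baseline = charge(const_schedule)                 # constant-coupling charging

best, best_lams = baseline, const_schedule        # crude policy search
for _ in range(300):
    lams = rng.uniform(0.0, 1.0, steps)
    E = charge(lams)
    if E > best:
        best, best_lams = E, lams

print(f"constant-coupling energy: {baseline:.3f}")
print(f"optimized schedule energy: {best:.3f}")
```

In this toy setting the modulated schedule already outperforms a constant coupling; the paper's contribution is doing this with reinforcement learning at scale, and for ergotropy and charging precision rather than mere stored energy.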
Pages: 7
Related papers
50 records total
  • [31] A Constraint-Based Routing and Charging Methodology for Battery Electric Vehicles With Deep Reinforcement Learning
    Zhang, Ying
    Li, Muyang
    Chen, Yuanchang
    Chiang, Yao-Yi
    Hua, Yunpeng
    IEEE TRANSACTIONS ON SMART GRID, 2023, 14 (03) : 2446 - 2459
  • [32] Optimizing EV Battery Management: Advanced Hybrid Reinforcement Learning Models for Efficient Charging and Discharging
    Yalcin, Sercan
    Herdem, Muenuer Sacit
    ENERGIES, 2024, 17 (12)
  • [33] A Study on Optimization Techniques for Variational Quantum Circuits in Reinforcement Learning
    Koelle, Michael
    Witter, Timo
    Rohe, Tobias
    Stenzel, Gerhard
    Altmann, Philipp
    Gabor, Thomas
    2024 IEEE INTERNATIONAL CONFERENCE ON QUANTUM SOFTWARE, IEEE QSW 2024, 2024, : 157 - 167
  • [34] Generalized autonomous optimization for quantum transmitters with deep reinforcement learning
    Lo, Yuen San
    Woodward, Robert I.
    Paraiso, Taofiq K.
    Poudel, Rudra P. K.
    Shields, Andrew J.
    QUANTUM COMPUTING, COMMUNICATION, AND SIMULATION IV, 2024, 12911
  • [35] Extended Dicke quantum battery with interatomic interactions and driving field
    Dou, Fu-Quan
    Lu, You-Qi
    Wang, Yuan-Jin
    Sun, Jian-An
    PHYSICAL REVIEW B, 2022, 105 (11)
  • [36] Quantum Dicke battery supercharging in the bound-luminosity state
    Seidov, S. S.
    Mukhin, S. I.
    PHYSICAL REVIEW A, 2024, 109 (02)
  • [37] Motion Planning using Reinforcement Learning for Electric Vehicle Battery Optimization (EVBO)
    Soni, Himanshu
    Gupta, Vishu
    Kumar, Rajesh
    2019 INTERNATIONAL CONFERENCE ON POWER ELECTRONICS, CONTROL AND AUTOMATION (ICPECA-2019), 2019, : 11 - 16
  • [38] Input Excitation Optimization for Estimating Battery Electrochemical Parameters using Reinforcement Learning
    Huang, Rui
    Fogelquist, Jackson
    Lin, Xinfan
    2022 IEEE VEHICLE POWER AND PROPULSION CONFERENCE (VPPC), 2022
  • [39] Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach
    Azzouz, Imen
    Fekih Hassen, Wiem
    ENERGIES, 2023, 16 (24)
  • [40] Vacuum-enhanced charging of a quantum battery
    Santos, Tiago F. F.
    Almeida, Yohan Vianna de
    Santos, Marcelo F.
    PHYSICAL REVIEW A, 2023, 107 (03)