Deep reinforcement learning-based digital twin for droplet microfluidics control

Cited by: 1
Authors:
Gyimah, Nafisat [1 ]
Scheler, Ott [2 ]
Rang, Toomas [2 ]
Pardy, Tamas [2 ]
Affiliations:
[1] Tallinn University of Technology, Thomas Johann Seebeck Department of Electronics, Tallinn, Estonia
[2] Tallinn University of Technology, Department of Chemistry and Biotechnology, Tallinn, Estonia
Keywords:
SIMULATION
DOI:
10.1063/5.0159981
Chinese Library Classification: O3 [Mechanics]
Subject Classification: 08; 0801
Abstract:
This study applied deep reinforcement learning (DRL) with the Proximal Policy Optimization (PPO) algorithm, coupled to a two-dimensional computational fluid dynamics (CFD) model, to achieve closed-loop control in microfluidics. The objective was to reach a desired droplet size with minimal variability in a microfluidic capillary flow-focusing device. An artificial neural network was used to map sensing signals (flow pressure and droplet size) to control actions (continuous-phase inlet pressure). To validate the numerical model, simulation results were compared with experimental data and showed good agreement, with errors below 11%. The PPO algorithm effectively controlled droplet size across multiple targets (50, 60, 70, and 80 µm) with differing levels of precision. The optimized DRL + CFD framework achieved droplet size control with a coefficient of variation (CV%) below 5% for all targets, outperforming the uncontrolled case. Furthermore, the adaptability of the PPO agent to external disturbances was evaluated extensively: when the system was subjected to sinusoidal mechanical vibrations with frequencies from 10 Hz to 10 kHz and amplitudes between 50 and 500 Pa, the PPO algorithm handled disturbances effectively within these limits, highlighting its robustness. Overall, this study demonstrates the DRL + CFD framework as a tool for designing and investigating novel control algorithms, advancing droplet microfluidics control research.
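To make the closed-loop setup described in the abstract concrete, the sketch below shows a PPO agent observing (inlet pressure, droplet size) and commanding the continuous-phase inlet pressure, with a sinusoidal disturbance like the one used in the robustness test. This is a minimal illustration, not the authors' implementation: the toy pressure-size law, the pressure ranges, the 1 ms control period, and the DropletEnv class are all assumptions standing in for the paper's CFD environment, and the code assumes the gymnasium and stable-baselines3 packages.

```python
# Minimal sketch (not the authors' code): PPO closed-loop droplet-size
# control on a toy surrogate standing in for the paper's 2D CFD model.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class DropletEnv(gym.Env):
    """Toy flow-focusing surrogate: droplet size shrinks as the
    continuous-phase inlet pressure rises (assumed monotone model)."""

    def __init__(self, target_um=60.0, dist_amp_pa=200.0, dist_freq_hz=100.0):
        super().__init__()
        self.target = target_um        # target droplet size (50-80 um in the paper)
        self.dist_amp = dist_amp_pa    # disturbance amplitude (paper tests 50-500 Pa)
        self.dist_freq = dist_freq_hz  # disturbance frequency (paper tests 10 Hz-10 kHz)
        # Action: normalised continuous-phase inlet pressure command
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        # Observation: [inlet pressure (Pa), measured droplet size (um)]
        self.observation_space = spaces.Box(
            low=np.array([0.0, 0.0], dtype=np.float32),
            high=np.array([2000.0, 200.0], dtype=np.float32))
        self.t = 0
        self.pressure = 1000.0

    def _droplet_size(self, pressure_pa):
        # Assumed inverse pressure-size law plus measurement noise
        return 8.0e4 / (pressure_pa + 1.0) + self.np_random.normal(0.0, 1.0)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.pressure = 1000.0
        obs = np.array([self.pressure, self._droplet_size(self.pressure)],
                       dtype=np.float32)
        return obs, {}

    def step(self, action):
        self.t += 1
        dt = 1e-3  # assumed control period (s)
        # Map normalised action to a pressure command in [500, 1500] Pa
        self.pressure = 1000.0 + 500.0 * float(action[0])
        # Sinusoidal mechanical disturbance, as in the robustness test
        disturbance = self.dist_amp * np.sin(2.0 * np.pi * self.dist_freq * self.t * dt)
        size = self._droplet_size(self.pressure + disturbance)
        reward = -abs(size - self.target)  # penalise deviation from the target size
        obs = np.array([self.pressure, size], dtype=np.float32)
        return obs, reward, False, self.t >= 200, {}

env = DropletEnv(target_um=60.0)
model = PPO("MlpPolicy", env, verbose=0)  # PPO, the paper's chosen DRL algorithm
model.learn(total_timesteps=20_000)
```

A post-training rollout with model.predict(obs, deterministic=True) can then log droplet sizes and report the abstract's variability metric, CV% = 100 × (standard deviation / mean).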
Pages: 15