Optimising maize threshing process with temporal proximity soft actor-critic deep reinforcement learning algorithm

Times cited: 0
Authors
Zhang, Qiang [1 ]
Fang, Xuwen [1 ]
Gao, Xiaodi [1 ,2 ]
Zhang, Jinsong [1 ]
Zhao, Xuelin [1 ]
Yu, Lulu [1 ]
Yu, Chunsheng [1 ]
Zhou, Deyi [1 ]
Zhou, Haigen [1 ]
Zhang, Li [1 ]
Wu, Xinling [1 ]
Affiliations
[1] Jilin Univ, Coll Biol & Agr Engn, Changchun 130022, Peoples R China
[2] Jilin Jianzhu Univ, Sch Emergency Sci & Engn, Changchun 130118, Peoples R China
Keywords
Threshing quality optimisation; Agricultural machinery; Machine learning; Agricultural automation; Sensitivity analysis; DAMAGE;
DOI
10.1016/j.biosystemseng.2024.11.001
Chinese Library Classification
S2 [Agricultural Engineering]
Discipline code
0828
Abstract
Maize threshing is a crucial process in grain production, and optimising it is essential for reducing post-harvest losses. This study proposes a model-based temporal proximity soft actor-critic (TP-SAC) algorithm to optimise the maize threshing process in the threshing drum. The proposed approach employs an LSTM model as a real-time predictor of threshing quality, achieving R² values of 97.17% and 98.43% for the damage and unthreshed rates on the validation set. In actual threshing experiments, the LSTM model shows average errors of 5.45% and 3.83% for the damage and unthreshed rates. The LSTM model is integrated with the TP-SAC algorithm, acting as the environment with which TP-SAC interacts, enabling efficient training with limited real-world data. The TP-SAC algorithm addresses the temporal correlation in the threshing process by incorporating temporal proximity sampling into the SAC algorithm's experience replay mechanism. TP-SAC outperforms the standard SAC algorithm in the simulated environment, demonstrating better sample efficiency and faster convergence. When deployed in actual threshing operations, the TP-SAC algorithm reduces the damage rate by an average of 0.91% across different feed rates compared with constant control. The proposed TP-SAC algorithm offers a novel and practical approach to optimising the maize threshing process, enhancing threshing quality.
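The abstract states that TP-SAC modifies the SAC experience replay mechanism with temporal proximity sampling, but does not specify the sampling rule. The sketch below is a minimal, hypothetical illustration of one way such a replay buffer could bias minibatch draws toward recent transitions, assuming an exponential recency weighting with a tunable `decay` parameter; the class and parameter names are not from the paper.

```python
from collections import deque

import numpy as np


class TemporalProximityReplayBuffer:
    """Replay buffer that biases sampling toward recent transitions.

    Hypothetical sketch: transition i (age a = n - 1 - i) is drawn with
    probability proportional to exp(-decay * a), so newer experience is
    replayed more often, reflecting temporal correlation in the process.
    """

    def __init__(self, capacity=100_000, decay=1e-4):
        self.buffer = deque(maxlen=capacity)
        self.decay = decay  # larger decay -> stronger bias toward recent samples

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        n = len(self.buffer)
        ages = np.arange(n - 1, -1, -1)          # age of each stored transition
        weights = np.exp(-self.decay * ages)      # recency weights
        probs = weights / weights.sum()
        idx = np.random.choice(n, size=batch_size, p=probs)
        return [self.buffer[i] for i in idx]
```

In a SAC training loop this buffer would simply replace the uniform replay buffer: the actor-critic update draws its minibatches via `sample(batch_size)`, and everything else in the algorithm is unchanged.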
Pages: 229-239
Page count: 11