Visual Spatial Attention and Proprioceptive Data-Driven Reinforcement Learning for Robust Peg-in-Hole Task Under Variable Conditions

Cited by: 12
Authors
Yasutomi, Andre Yuji [1 ]
Ichiwara, Hideyuki [1 ]
Ito, Hiroshi [1 ]
Mori, Hiroki [2 ]
Ogata, Tetsuya [3 ,4 ]
Affiliations
[1] Hitachi Ltd, R&D Grp, Hitachinaka 3120034, Japan
[2] Waseda Univ, Future Robot Org, Tokyo 1698555, Japan
[3] Waseda Univ, Grad Sch Fundamental Sci & Engn, Tokyo 1698555, Japan
[4] Waseda Univ, Waseda Res Inst Sci & Engn WISE, Tokyo 1698555, Japan
Keywords
Robotics and automation in construction; reinforcement learning; deep learning for visual perception
DOI
10.1109/LRA.2023.3243526
Chinese Library Classification
TP24 [Robotics]
Discipline Classification Codes
080202; 1405
Abstract
Anchor-bolt insertion is a peg-in-hole task performed in the construction field for holes in concrete. Efforts have been made to automate this task, but variable lighting and hole surface conditions, as well as the requirements for short setup and task execution times, make automation challenging. In this study, we introduce a vision and proprioceptive data-driven robot control model for this task that is robust to challenging lighting and hole surface conditions. The model consists of a spatial attention point network (SAP) and a deep reinforcement learning (DRL) policy that are trained jointly, end-to-end, to control the robot. The model is trained offline with a sample-efficient framework designed to reduce training time and minimize the reality gap when transferring the model to the physical world. Through evaluations with an industrial robot performing the task in 12 unknown holes, starting from 16 different initial positions, and under three different lighting conditions (two with misleading shadows), we demonstrate that SAP can generate relevant attention points in the image even under challenging lighting. We also show that the proposed model enables task execution with a higher success rate and shorter task completion time than various baselines. Given the model's effectiveness even under severe lighting, initial-position, and hole conditions, and the offline training framework's sample efficiency and short training time, this approach can be readily applied in construction.
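The abstract describes extracting image attention points and combining them with proprioceptive data as the policy's input. A common way to realize such attention points is a spatial softmax (soft-argmax) over convolutional feature maps. The sketch below is a minimal illustrative example of that idea only, not the authors' implementation; the function names `spatial_softmax_points` and `policy_action`, the linear policy head, and all shapes are assumptions for illustration.

```python
import numpy as np

def spatial_softmax_points(feature_maps):
    """Extract one (x, y) attention point per channel via spatial softmax.

    feature_maps: array of shape (C, H, W). Each channel's softmax over
    pixels gives a distribution whose expected coordinate is the point.
    """
    c, h, w = feature_maps.shape
    # Normalized pixel coordinate grids in [-1, 1]
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    flat = feature_maps.reshape(c, -1)
    flat = flat - flat.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(flat) / np.exp(flat).sum(axis=1, keepdims=True)
    px = probs @ xs.ravel()  # expected x-coordinate per channel
    py = probs @ ys.ravel()  # expected y-coordinate per channel
    return np.stack([px, py], axis=1)  # shape (C, 2)

def policy_action(feature_maps, proprio, weights, bias):
    """Toy linear policy over [attention points ; proprioceptive state]."""
    points = spatial_softmax_points(feature_maps).ravel()
    obs = np.concatenate([points, proprio])
    return weights @ obs + bias
```

Because the soft-argmax is differentiable, gradients from the policy loss can flow back into the feature extractor, which is what makes joint end-to-end training of the attention module and the policy possible.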
Pages: 1834-1841 (8 pages)