AirCapRL: Autonomous Aerial Human Motion Capture Using Deep Reinforcement Learning

Cited by: 16
Authors
Tallamraju, Rahul [1 ]
Saini, Nitin [1 ]
Bonetto, Elia [1 ]
Pabst, Michael [1 ]
Liu, Yu Tang [1 ]
Black, Michael J. [1 ]
Ahmad, Aamir [1 ,2 ]
Affiliations
[1] Max Planck Institute for Intelligent Systems, Tübingen, Germany
[2] University of Stuttgart, Department of Aerospace Engineering and Geodesy, Stuttgart, Germany
Keywords
Reinforcement learning; aerial systems; perception and autonomy; multi-robot systems; visual tracking
DOI
10.1109/LRA.2020.3013906
Chinese Library Classification (CLC)
TP24 [Robotics]
Subject Classification
080202; 1405
Abstract
In this letter, we introduce a deep reinforcement learning (DRL) based multi-robot formation controller for the task of autonomous aerial human motion capture (MoCap). We focus on vision-based MoCap, where the objective is to estimate the trajectory of body pose and shape of a single moving person using multiple micro aerial vehicles. State-of-the-art solutions to this problem are based on classical control methods, which depend on hand-crafted system and observation models. Such models are difficult to derive and to generalize across different systems. Moreover, the non-linearities and non-convexities of these models lead to sub-optimal controls. In our work, we formulate this problem as a sequential decision-making task to achieve the vision-based motion capture objectives, and we solve it using a deep neural network-based RL method. We leverage proximal policy optimization (PPO) to train a stochastic decentralized control policy for formation control. The neural network is trained in a parallelized setup in synthetic environments. We performed extensive simulation experiments to validate our approach. Finally, real-robot experiments demonstrate that our policies generalize to real-world conditions.
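For orientation only, the following is a minimal PyTorch sketch of the PPO clipped-surrogate update for a stochastic Gaussian control policy, the core ingredient the abstract names. The network sizes, hyperparameters, and the random batch standing in for rollouts are illustrative assumptions; the letter's actual observation/action spaces, reward terms, and decentralized multi-MAV formation setup are not reproduced here.

```python
# Hedged sketch of a PPO clipped-objective update for a stochastic
# Gaussian policy. All dimensions and hyperparameters below are
# illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn
from torch.distributions import Normal

OBS_DIM, ACT_DIM = 16, 4           # assumed sizes; the paper's spaces differ
CLIP_EPS, LR, EPOCHS = 0.2, 3e-4, 10

class GaussianPolicy(nn.Module):
    """Stochastic policy: mean of a diagonal Gaussian over actions,
    with a state-independent learned log standard deviation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, ACT_DIM),
        )
        self.log_std = nn.Parameter(torch.zeros(ACT_DIM))

    def dist(self, obs):
        return Normal(self.net(obs), self.log_std.exp())

def ppo_update(policy, optimizer, obs, actions, old_log_probs, advantages):
    """Several epochs of the clipped-surrogate update on one batch."""
    for _ in range(EPOCHS):
        dist = policy.dist(obs)
        log_probs = dist.log_prob(actions).sum(-1)
        ratio = (log_probs - old_log_probs).exp()        # pi_new / pi_old
        unclipped = ratio * advantages
        clipped = torch.clamp(ratio, 1 - CLIP_EPS, 1 + CLIP_EPS) * advantages
        loss = -torch.min(unclipped, clipped).mean()     # PPO-Clip objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Toy usage: random tensors stand in for rollouts collected from the
# synthetic training environments described in the abstract.
policy = GaussianPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=LR)
obs = torch.randn(256, OBS_DIM)
with torch.no_grad():
    d = policy.dist(obs)
    actions = d.sample()
    old_log_probs = d.log_prob(actions).sum(-1)
advantages = torch.randn(256)                            # placeholder for GAE
ppo_update(policy, optimizer, obs, actions, old_log_probs, advantages)
```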
Pages: 6678 - 6685
Page count: 8
Related Papers (50 records)
  • [21] Autonomous Obstacle Avoidance Algorithm for Unmanned Aerial Vehicles Based on Deep Reinforcement Learning
    Gao, Yuan
    Ren, Ling
    Shi, Tianwei
    Xu, Teng
    Ding, Jianbang
    ENGINEERING LETTERS, 2024, 32 (03) : 650 - 660
  • [22] Distributed deep reinforcement learning for autonomous aerial eVTOL mobility in drone taxi applications
    Yun, Won Joon
    Jung, Soyi
    Kim, Joongheon
    Kim, Jae-Hyun
    ICT EXPRESS, 2021, 7 (01) : 1 - 4
  • [23] Quadrotor motion control using deep reinforcement learning
    Jiang, Zifei
    Lynch, Alan F.
    JOURNAL OF UNMANNED VEHICLE SYSTEMS, 2021, 9 (04) : 234 - 251
  • [24] Local motion simulation using deep reinforcement learning
    Xu, Dong
    Huang, Xiao
    Li, Zhenlong
    Li, Xiang
    TRANSACTIONS IN GIS, 2020, 24 (03) : 756 - 779
  • [25] A Survey of Deep Reinforcement Learning Algorithms for Motion Planning and Control of Autonomous Vehicles
    Ye, Fei
    Zhang, Shen
    Wang, Pin
    Chan, Ching-Yao
    2021 32ND IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2021, : 1073 - 1080
  • [26] Autonomous Household Energy Management Using Deep Reinforcement Learning
    Tsang, Nathan
    Cao, Collin
    Wu, Serena
    Yan, Zilin
    Yousefi, Ashkan
    Fred-Ojala, Alexander
    Sidhu, Ikhlaq
    2019 IEEE INTERNATIONAL CONFERENCE ON ENGINEERING, TECHNOLOGY AND INNOVATION (ICE/ITMC), 2019,
  • [27] Autonomous Emergency Landing for Multicopters using Deep Reinforcement Learning
    Bartolomei, Luca
    Kompis, Yves
    Teixeira, Lucas
    Chli, Margarita
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 3392 - 3399
  • [28] Steering control in autonomous vehicles using deep reinforcement learning
    Xue Chong
    Zhang Xinyu
    Jia Peng
    The Journal of China Universities of Posts and Telecommunications, 2018, 25 (06) : 58 - 64
  • [29] Autonomous Navigation and Control of a Quadrotor Using Deep Reinforcement Learning
    Mokhtar, Mohamed
    El-Badawy, Ayman
    2023 INTERNATIONAL CONFERENCE ON UNMANNED AIRCRAFT SYSTEMS, ICUAS, 2023, : 1045 - 1052