3D human pose detection using nano sensor and multi-agent deep reinforcement learning

Cited by: 2
Authors
Sun, Yangjie [1 ]
Che, Xiaoxi [1 ]
Zhang, Nan [1 ]
Affiliations
[1] Beijing Univ Technol, Dept Phys Educ, Beijing 100124, Peoples R China
Keywords
pose detection; EMG signal; feature extraction; nano sensor; multi-agent deep reinforcement learning; pose solution; ACTION RECOGNITION; POSTURE DETECTION; NETWORK; HYBRID;
DOI
10.3934/mbe.2023230
Chinese Library Classification (CLC)
Q [Biological Sciences];
Discipline Classification Codes
07 ; 0710 ; 09 ;
Abstract
Due to the complexity of three-dimensional (3D) human pose, ordinary sensors struggle to capture subtle changes in pose, which reduces the accuracy of 3D human pose detection. A novel 3D human motion pose detection method is designed by combining nano sensors with multi-agent deep reinforcement learning. First, nano sensors are placed at key parts of the human body to collect electromyogram (EMG) signals. Second, after the EMG signal is de-noised by blind source separation, time-domain and frequency-domain features of the surface EMG signal are extracted. Finally, a deep reinforcement learning network is introduced in a multi-agent environment to build the multi-agent deep reinforcement learning pose detection model, which outputs the 3D local pose of the human body from the EMG features. The multi-sensor detection results are then fused and a pose solution is computed to obtain the final 3D human pose. The results show that the proposed method detects various human poses with high accuracy: the accuracy, precision, recall and specificity of the 3D detection results are 0.97, 0.98, 0.95 and 0.98, respectively. Compared with other methods, the detection results are more accurate, and the method can be widely applied in medicine, film, sports and other fields.
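The abstract mentions extracting time-domain and frequency-domain features from the surface EMG signal but does not list the exact descriptors used. The sketch below computes a standard sEMG feature set (mean absolute value, root mean square, zero crossings, mean frequency, median frequency) as an illustration of this step; the specific features and parameters are assumptions, not the paper's reported method.

```python
import numpy as np

def emg_features(signal, fs):
    """Compute common time- and frequency-domain sEMG features.

    signal: 1-D array of EMG samples; fs: sampling rate in Hz.
    Illustrative feature set, not the paper's exact one.
    """
    signal = np.asarray(signal, dtype=float)
    # Time-domain features
    mav = np.mean(np.abs(signal))                # mean absolute value
    rms = np.sqrt(np.mean(signal ** 2))          # root mean square
    zc = int(np.sum(np.diff(np.sign(signal)) != 0))  # zero crossings
    # Frequency-domain features from the power spectrum
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    mnf = np.sum(freqs * spectrum) / np.sum(spectrum)  # mean frequency
    cum = np.cumsum(spectrum)
    mdf = freqs[np.searchsorted(cum, cum[-1] / 2.0)]   # median frequency
    return {"MAV": mav, "RMS": rms, "ZC": zc, "MNF": mnf, "MDF": mdf}

# Usage: a pure 50 Hz tone sampled at 1 kHz should yield MNF and MDF near 50 Hz.
fs = 1000
t = np.arange(fs) / fs
features = emg_features(np.sin(2 * np.pi * 50 * t), fs)
```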
Pages: 4970-4987
Page count: 18
Related Papers
50 records in total
  • [31] Strategic Interaction Multi-Agent Deep Reinforcement Learning
    Zhou, Wenhong
    Li, Jie
    Chen, Yiting
    Shen, Lin-Cheng
    IEEE Access, 2020, 8 : 119000 - 119009
  • [32] Multi-Agent Deep Reinforcement Learning in Vehicular OCC
    Islam, Amirul
    Musavian, Leila
    Thomos, Nikolaos
    2022 IEEE 95TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2022-SPRING), 2022,
  • [33] Teaching on a Budget in Multi-Agent Deep Reinforcement Learning
    Ilhan, Ercument
    Gow, Jeremy
    Perez-Liebana, Diego
    2019 IEEE CONFERENCE ON GAMES (COG), 2019,
  • [34] Research Progress of Multi-Agent Deep Reinforcement Learning
    Ding, Shi-Fei
    Du, Wei
    Zhang, Jian
    Guo, Li-Li
    Ding, Ding
    Jisuanji Xuebao/Chinese Journal of Computers, 2024, 47 (07): : 1547 - 1567
  • [35] Agent Coordination in Air Combat Simulation using Multi-Agent Deep Reinforcement Learning
    Kallstrom, Johan
    Heintz, Fredrik
    2020 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2020, : 2157 - 2164
  • [36] Multi-Agent Deep Reinforcement Learning for Packet Routing in Tactical Mobile Sensor Networks
    Okine, Andrews A.
    Adam, Nadir
    Naeem, Faisal
    Kaddoum, Georges
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21 (02): : 2155 - 2169
  • [37] Integrated and Fungible Scheduling of Deep Learning Workloads Using Multi-Agent Reinforcement Learning
    Li, Jialun
    Xiao, Danyang
    Yang, Diying
    Mo, Xuan
    Wu, Weigang
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2025, 36 (03) : 391 - 406
  • [38] Human-Centered AI using Ethical Causality and Learning Representation for Multi-Agent Deep Reinforcement Learning
    Ho, Joshua
    Wang, Chien-Min
    PROCEEDINGS OF THE 2021 IEEE INTERNATIONAL CONFERENCE ON HUMAN-MACHINE SYSTEMS (ICHMS), 2021, : 143 - 148
  • [39] Multi-Agent System for Emulating Personality Traits Using Deep Reinforcement Learning
    Liapis, Georgios
    Vlahavas, Ioannis
    APPLIED SCIENCES-BASEL, 2024, 14 (24):
  • [40] Urban Traffic Control Using Distributed Multi-agent Deep Reinforcement Learning
    Kitagawa, Shunya
    Moustafa, Ahmed
    Ito, Takayuki
    PRICAI 2019: TRENDS IN ARTIFICIAL INTELLIGENCE, PT III, 2019, 11672 : 337 - 349