Motion Inference Using Sparse Inertial Sensors, Self-Supervised Learning, and a New Dataset of Unscripted Human Motion

Cited by: 14
Authors
Geissinger, Jack H. [1 ]
Asbeck, Alan T. [2 ]
Affiliations
[1] Virginia Tech, Dept Elect & Comp Engn, Blacksburg, VA 24061 USA
[2] Virginia Tech, Dept Mech Engn, Blacksburg, VA 24061 USA
Keywords
motion dataset; kinematics; inertial sensors; self-supervised learning; sparse sensors; pose estimation; capture
DOI
10.3390/s20216330
Chinese Library Classification
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
In recent years, wearable sensors have become common, with possible applications in biomechanical monitoring, sports and fitness training, rehabilitation, assistive devices, and human-computer interaction. Our goal was to achieve accurate kinematics estimates using a small number of sensors. To accomplish this, we introduced a new dataset (the Virginia Tech Natural Motion Dataset) of full-body human motion capture using the Xsens MVN Link system, containing more than 40 hours of unscripted daily-life motion in the open world. Using this dataset, we conducted self-supervised machine learning to perform kinematics inference: we predicted the complete kinematics of the upper body or full body using a reduced set of sensors (3 or 4 for the upper body, 5 or 6 for the full body). We used several sequence-to-sequence (Seq2Seq) and Transformer models for motion inference, and compared results across four machine learning models and four sensor placement configurations. Our models produced mean angular errors of 10-15 degrees for both the upper body and the full body, with worst-case errors of less than 30 degrees. The dataset and our machine learning code are freely available.
Pages: 1-30
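
The abstract describes regressing full-body kinematics from a reduced sensor set using Seq2Seq and Transformer models, with accuracy reported as mean angular error. Below is a minimal, hypothetical PyTorch sketch of what such a model and metric might look like. The architecture, sensor and joint counts, quaternion representation, and all names (SparseToFullBody, mean_angular_error_deg) are illustrative assumptions, not the authors' released code.

# Hypothetical sketch of sparse-to-full-body kinematics inference in the
# spirit of the paper: map orientation readings from a few IMUs to joint
# rotations for every body segment. Dimensions and hyperparameters are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

NUM_SPARSE_SENSORS = 5   # one assumed full-body configuration (paper uses 5 or 6)
NUM_BODY_JOINTS = 22     # assumed full-body segment count
SEQ_LEN = 60             # assumed frames per input window

class SparseToFullBody(nn.Module):
    """Transformer encoder that regresses full-body joint rotations
    (as unit quaternions) from sparse IMU orientations."""
    def __init__(self, d_model=128, nhead=4, num_layers=4):
        super().__init__()
        # Each sensor contributes one quaternion (4 values) per frame.
        self.input_proj = nn.Linear(NUM_SPARSE_SENSORS * 4, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Predict one quaternion per joint per frame.
        self.output_proj = nn.Linear(d_model, NUM_BODY_JOINTS * 4)

    def forward(self, x):
        # x: (batch, SEQ_LEN, NUM_SPARSE_SENSORS * 4)
        h = self.encoder(self.input_proj(x))
        q = self.output_proj(h).view(x.shape[0], x.shape[1], NUM_BODY_JOINTS, 4)
        # Normalize so each output is a valid unit quaternion.
        return q / q.norm(dim=-1, keepdim=True).clamp(min=1e-8)

def mean_angular_error_deg(q_pred, q_true):
    """Mean geodesic angle (degrees) between predicted and ground-truth
    unit quaternions, ignoring the q/-q sign ambiguity."""
    dot = (q_pred * q_true).sum(dim=-1).abs().clamp(max=1.0)
    return torch.rad2deg(2.0 * torch.acos(dot)).mean()

# Smoke test with random data.
model = SparseToFullBody()
x = torch.randn(2, SEQ_LEN, NUM_SPARSE_SENSORS * 4)
q_true = torch.randn(2, SEQ_LEN, NUM_BODY_JOINTS, 4)
q_true = q_true / q_true.norm(dim=-1, keepdim=True)
print(mean_angular_error_deg(model(x), q_true))

The geodesic angle between unit quaternions is one standard way to report angular error; the 10-15 degree mean errors in the abstract could be computed with a metric of this general form, though the authors' exact rotation representation and error metric may differ.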