Optimal Sensor Placement and Multimodal Fusion for Human Activity Recognition in Agricultural Tasks

Cited by: 2
Authors
Benos, Lefteris [1 ]
Tsaopoulos, Dimitrios [1 ]
Tagarakis, Aristotelis C. [1 ]
Kateris, Dimitrios [1 ]
Bochtis, Dionysis [1 ,2 ]
Affiliations
[1] Ctr Res & Technol Hellas CERTH, Inst Bioecon & Agritechnol IBO, GR-57001 Thessaloniki, Greece
[2] FarmB Digital Agr, Doiraniis 17, Thessaloniki GR-54639, Greece
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 18
Keywords
Long Short-Term Memory (LSTM) networks; wearable sensors; multi-sensor information fusion; human-robot collaboration; human factors; cost-optimal system configuration; ROBOTS; SPINE;
DOI
10.3390/app14188520
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
This study examines the impact of sensor placement and multimodal sensor fusion on the performance of a Long Short-Term Memory (LSTM)-based model for classifying human activities in an agricultural harvesting scenario involving human-robot collaboration. Data were collected from twenty participants performing six distinct activities while wearing five inertial measurement units placed at various anatomical locations. The sensor signals were first processed to eliminate noise and then fed into an LSTM neural network, which recognizes features in sequential, time-dependent data. Results indicated that the chest-mounted sensor achieved the highest F1-score of 0.939, outperforming the other placements and their combinations. Moreover, the magnetometer surpassed the accelerometer and gyroscope, highlighting its superior ability to capture the orientation and motion information relevant to the investigated activities. Nevertheless, the fusion of accelerometer, gyroscope, and magnetometer data demonstrated the benefit of integrating different sensor types to improve classification accuracy. The study emphasizes the effectiveness of strategic sensor placement and fusion in optimizing human activity recognition, thereby minimizing data requirements and computational expense and resulting in a cost-optimal system configuration. Overall, this research contributes to the development of more intelligent, safe, cost-effective, and adaptive synergistic systems that can be integrated into a variety of applications.
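The abstract describes concatenating accelerometer, gyroscope, and magnetometer channels and feeding the fused sequence into an LSTM classifier. As a minimal illustrative sketch (not the authors' implementation), the NumPy snippet below runs a single-layer LSTM forward pass over one window of fused IMU samples and maps the final hidden state to six activity logits; all dimensions, weights, and the synthetic input window are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gate pre-activations stacked as [input, forget, cell, output]."""
    n = h.shape[0]
    z = W @ x + U @ h + b                # shape (4n,)
    i = sigmoid(z[0:n])                  # input gate
    f = sigmoid(z[n:2 * n])              # forget gate
    g = np.tanh(z[2 * n:3 * n])          # candidate cell state
    o = sigmoid(z[3 * n:4 * n])          # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, n_hid, n_classes = 9, 16, 6        # 9 fused channels (3 axes x 3 sensor types), 6 activities
W = rng.normal(0, 0.1, (4 * n_hid, d_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
W_out = rng.normal(0, 0.1, (n_classes, n_hid))

# One synthetic window of 50 fused samples: [accel | gyro | mag] per time step.
window = rng.normal(size=(50, d_in))
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in window:
    h, c = lstm_step(x, h, c, W, U, b)

logits = W_out @ h                       # class scores from the final hidden state
pred = int(np.argmax(logits))            # predicted activity index in 0..5
```

In a trained system the weights would of course be learned from labeled activity windows; the sketch only illustrates how fused multimodal channels flow through the recurrent cell to a per-window class prediction.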
Pages: 16
Related Papers
50 records in total
  • [41] Multimodal Recognition of Personality Traits in Human-Computer Collaborative Tasks
    Batrinca, Ligia Maria
    Lepri, Bruno
    Mana, Nadia
    Pianesi, Fabio
    ICMI '12: PROCEEDINGS OF THE ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2012, : 39 - 46
  • [42] Realistic human action recognition with multimodal feature selection and fusion
    Wu, Qiuxia
    Wang, Zhiyong
    Deng, Feiqi
    Chi, Zheru
    Feng, David Dagan
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2013, 43 (04) : 875 - 885
  • [43] Multimodal Fusion for Human Action Recognition via Spatial Transformer
    Sun, Yaohui
    Xu, Weiyao
    Gao, Ju
    Yu, Xiaoyi
    2023 35TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2023, : 1638 - 1641
  • [45] Human Activity Recognition Based on Embedded Sensor Data Fusion for the Internet of Healthcare Things
    Issa, Mohamed E.
    Helmi, Ahmed M.
    Al-Qaness, Mohammed A. A.
    Dahou, Abdelghani
    Abd Elaziz, Mohamed
    Damasevicius, Robertas
    HEALTHCARE, 2022, 10 (06)
  • [46] On the Use of Sensor Fusion to Reduce the Impact of Rotational and Additive Noise in Human Activity Recognition
    Banos, Oresti
    Damas, Miguel
    Pomares, Hector
    Rojas, Ignacio
    SENSORS, 2012, 12 (06) : 8039 - 8054
  • [47] Interpretable Passive Multi-Modal Sensor Fusion for Human Identification and Activity Recognition
    Yuan, Liangqi
    Andrews, Jack
    Mu, Huaizheng
    Vakil, Asad
    Ewing, Robert
    Blasch, Erik
    Li, Jia
    SENSORS, 2022, 22 (15)
  • [48] Efficient Human Gait Activity Recognition Based on Sensor Fusion and Intelligent Stacking Framework
    Tarekegn, Adane Nega
    Sajjad, Muhammad
    Cheikh, Faouzi Alaya
    Ullah, Mohib
    Muhammad, Khan
    IEEE SENSORS JOURNAL, 2023, 23 (22) : 28355 - 28369
  • [49] ENHANCING HUMAN ACTIVITY RECOGNITION THROUGH SENSOR FUSION AND HYBRID DEEP LEARNING MODEL
    Tarekegn, Adane Nega
    Ullah, Mohib
    Cheikh, Faouzi Alaya
    Sajjad, Muhammad
    2023 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING WORKSHOPS, ICASSPW, 2023,
  • [50] Classification Model for Multi-Sensor Data Fusion Apply for Human Activity Recognition
    Arnon, Paranyu
    2014 INTERNATIONAL CONFERENCE ON COMPUTER, COMMUNICATIONS, AND CONTROL TECHNOLOGY (I4CT), 2014, : 415 - 419