Optimal Sensor Placement and Multimodal Fusion for Human Activity Recognition in Agricultural Tasks

Cited by: 2
Authors
Benos, Lefteris [1 ]
Tsaopoulos, Dimitrios [1 ]
Tagarakis, Aristotelis C. [1 ]
Kateris, Dimitrios [1 ]
Bochtis, Dionysis [1 ,2 ]
Affiliations
[1] Ctr Res & Technol Hellas CERTH, Inst Bioecon & Agritechnol IBO, GR-57001 Thessaloniki, Greece
[2] FarmB Digital Agr, Doiraniis 17, Thessaloniki GR-54639, Greece
Source
APPLIED SCIENCES-BASEL | 2024年 / 14卷 / 18期
Keywords
Long Short-Term Memory (LSTM) networks; wearable sensors; multi-sensor information fusion; human-robot collaboration; human factors; cost-optimal system configuration; ROBOTS; SPINE;
DOI
10.3390/app14188520
CLC Number
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
This study examines the impact of sensor placement and multimodal sensor fusion on the performance of a Long Short-Term Memory (LSTM)-based model for classifying human activities in an agricultural harvesting scenario involving human-robot collaboration. Data were collected from twenty participants performing six distinct activities while wearing five inertial measurement units placed at various anatomical locations. The sensor signals were first processed to eliminate noise and then fed into an LSTM neural network, which recognizes features in sequential time-dependent data. Results indicated that the chest-mounted sensor provided the highest F1-score of 0.939, outperforming all other placements and sensor combinations. Moreover, the magnetometer surpassed the accelerometer and gyroscope, highlighting its superior ability to capture the orientation and motion cues relevant to the investigated activities. Nevertheless, multimodal fusion of accelerometer, gyroscope, and magnetometer data demonstrated the benefit of integrating different sensor types to improve classification accuracy. The study emphasizes the effectiveness of strategic sensor placement and fusion in optimizing human activity recognition, thereby minimizing data requirements and computational expense and yielding a cost-optimal system configuration. Overall, this research contributes to the development of more intelligent, safe, and cost-effective adaptive synergistic systems that can be integrated into a variety of applications.
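The pipeline the abstract describes (windowed multi-channel IMU signals fed through an LSTM that outputs one of six activity classes) can be sketched as a single forward pass. The hidden size, window length, 9-channel layout (accelerometer + gyroscope + magnetometer), and random weights below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. Gates are stacked row-wise: input, forget, cell, output."""
    z = W @ x + U @ h + b
    H = h.size
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2*H])      # forget gate
    g = np.tanh(z[2*H:3*H])    # candidate cell state
    o = sigmoid(z[3*H:4*H])    # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def classify_window(window, params, n_classes=6):
    """Run the LSTM over a (T, 9) IMU window; return softmax class probabilities."""
    W, U, b, Wo, bo = params
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in window:               # unroll over the time dimension
        h, c = lstm_step(x, h, c, W, U, b)
    logits = Wo @ h + bo           # final hidden state -> 6 activity logits
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Illustrative shapes: hidden size 16, 9 IMU channels, 6 activity classes.
rng = np.random.default_rng(0)
H, D, C = 16, 9, 6
params = (rng.normal(0, 0.1, (4 * H, D)),   # input weights W
          rng.normal(0, 0.1, (4 * H, H)),   # recurrent weights U
          np.zeros(4 * H),                  # gate biases b
          rng.normal(0, 0.1, (C, H)),       # output layer Wo
          np.zeros(C))                      # output bias bo
window = rng.normal(size=(128, D))          # 128 samples x 9 channels
probs = classify_window(window, params)
```

In practice the weights would be learned (e.g. with a deep-learning framework), and multimodal fusion here simply amounts to concatenating the three sensor modalities into the 9-channel input before the recurrent layer.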
Pages: 16
Related Papers (50 total)
  • [31] Data Fusion for Human Activity Recognition Based on RF Sensing and IMU Sensor
    Yu, Zheqi
    Zahid, Adnan
    Taylor, William
    Abbas, Hasan
    Heidari, Hadi
    Imran, Muhammad A.
    Abbasi, Qammer H.
    BODY AREA NETWORKS: SMART IOT AND BIG DATA FOR INTELLIGENT HEALTH MANAGEMENT, 2022, 420 : 3 - 14
  • [32] Adaptive multiple classifiers fusion for inertial sensor based human activity recognition
    Tian, Yiming
    Wang, Xitai
    Chen, Wei
    Liu, Zuojun
    Li, Lifeng
    Cluster Computing, 2019, 22 : 8141 - 8154
  • [33] The Effect of Sensor Placement in A Cooking Activity Recognition System
    Moghaddam, Majid Ghosian
    Shirehjini, Ali Asghar Nazari
    Shirmohammadi, Shervin
    2024 IEEE INTERNATIONAL INSTRUMENTATION AND MEASUREMENT TECHNOLOGY CONFERENCE, I2MTC 2024, 2024,
  • [34] Optimal placement of IMU sensor for the detection of children activity
    Madej, Magdalena
    Ruminski, Jacek
    2022 15TH INTERNATIONAL CONFERENCE ON HUMAN SYSTEM INTERACTION (HSI), 2022,
  • [35] A Multimodal Sensor Fusion Framework Robust to Missing Modalities for Person Recognition
    John, Vijay
    Kawanishi, Yasutomo
    PROCEEDINGS OF THE 4TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA IN ASIA, MMASIA 2022, 2022,
  • [36] Multi-level feature fusion for multimodal human activity recognition in Internet of Healthcare Things
    Islam, Md. Milon
    Nooruddin, Sheikh
    Karray, Fakhri
    Muhammad, Ghulam
    INFORMATION FUSION, 2023, 94 : 17 - 31
  • [37] A reliability guided sensor fusion model for optimal weighting in multimodal systems
    Makkook, Mustapha
    Basir, Otman
    Karray, Fakhreddine
    2008 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, VOLS 1-12, 2008, : 2453 - 2456
  • [38] Poster: Multimodal ConvTransformer For Human Activity Recognition
    Haque, Syed Tousiful
    Ngu, Anne H. H.
    2024 IEEE/ACM CONFERENCE ON CONNECTED HEALTH: APPLICATIONS, SYSTEMS AND ENGINEERING TECHNOLOGIES, CHASE 2024, 2024, : 206 - 207
  • [39] Multimodal Workplace Monitoring for Human Activity Recognition
    Mitsou, Alexandros
    Spyrou, Evaggelos
    Giannakopoulos, Theodoros
    25TH PAN-HELLENIC CONFERENCE ON INFORMATICS WITH INTERNATIONAL PARTICIPATION (PCI2021), 2021, : 206 - 211
  • [40] Confidence-based Deep Multimodal Fusion for Activity Recognition
    Choi, Jun-Ho
    Lee, Jong-Seok
    PROCEEDINGS OF THE 2018 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING AND PROCEEDINGS OF THE 2018 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS (UBICOMP/ISWC'18 ADJUNCT), 2018, : 1548 - 1556