Fusing Object Information and Inertial Data for Activity Recognition

Cited by: 5
Authors
Diete, Alexander [1 ]
Stuckenschmidt, Heiner [1 ]
Affiliations
[1] Univ Mannheim, Data & Web Sci Grp, D-68159 Mannheim, Germany
Keywords
activity recognition; machine learning; multi-modality; vision; prevention
DOI
10.3390/s19194119
Chinese Library Classification
O65 [Analytical Chemistry]
Discipline Codes
070302; 081704
Abstract
In the field of pervasive computing, wearable devices have been widely used for recognizing human activities. One important area in this research is the recognition of activities of daily living, where inertial sensors and interaction sensors (such as RFID tags with scanners) are especially popular choices as data sources. Using interaction sensors, however, has one drawback: they may not differentiate between a proper interaction and the mere touching of an object. A positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction follows. There are, however, many scenarios, such as medicine intake, that rely heavily on correctly recognized activities. In our work, we aim to address this limitation and present a multimodal, egocentric activity recognition approach. Our solution relies on object detection that recognizes activity-critical objects in a frame. As a high-quality camera view cannot always be expected, we enrich the vision features with inertial sensor data that monitors the user's arm movements. In this way, we try to overcome the drawbacks of each respective sensor. We present our results of combining inertial and video features to recognize human activities in different types of scenarios, where we achieve an F1-measure of up to 79.6%.
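The feature-level fusion the abstract describes can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the authors' actual pipeline: it assumes simple per-window accelerometer statistics concatenated with per-frame object-detection confidence scores and fed to an off-the-shelf random forest classifier on synthetic stand-in data; the window length, number of object classes, and classifier choice are all illustrative assumptions.

    # Minimal sketch of feature-level fusion of inertial and object features.
    # All shapes, class counts, and the classifier are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def inertial_features(window):
        # Simple statistics over a (samples x 3 axes) accelerometer window.
        return np.concatenate([window.mean(axis=0), window.std(axis=0)])

    def fused_feature(window, obj_confidences):
        # Concatenate inertial statistics with object-detector confidences.
        return np.concatenate([inertial_features(window), obj_confidences])

    # Synthetic stand-in data: 500 windows of 50 accelerometer samples each,
    # plus detection confidences for 5 hypothetical activity-critical objects.
    n_windows, window_len, n_objects = 500, 50, 5
    windows = rng.normal(size=(n_windows, window_len, 3))
    confidences = rng.uniform(size=(n_windows, n_objects))
    labels = rng.integers(0, 4, size=n_windows)  # 4 example activity classes

    X = np.stack([fused_feature(w, c) for w, c in zip(windows, confidences)])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))

Early (feature-level) fusion of this kind lets a weak signal from one modality, e.g., a blurry camera frame, be compensated by the other, which is the motivation the abstract gives for combining the two sensor types.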
Pages: 22
Related Papers (50 records in total)
  • [21] Fusing the facial temporal information in videos for face recognition
    Selvam, Ithayarani Panner
    Karruppiah, Muneeswaran
    IET COMPUTER VISION, 2016, 10 (07) : 650 - 659
  • [22] Fusing Attention Features and Contextual Information for Scene Recognition
    Peng, Yuqing
    Liu, Xianzi
    Wang, Chenxi
    Xiao, Tengfei
    Li, Tiejun
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2022, 36 (03)
  • [23] Fusing depth and colour information for human action recognition
    Avola, Danilo
    Bernardi, Marco
    Foresti, Gian Luca
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (05) : 5919 - 5939
  • [25] Multi-information Fusing Based Railroad Object Detection
    Xie, Chengli
    Wang, Jinqiao
    Lu, Hanqing
    PROCEEDINGS OF THE 2009 CHINESE CONFERENCE ON PATTERN RECOGNITION AND THE FIRST CJK JOINT WORKSHOP ON PATTERN RECOGNITION, VOLS 1 AND 2, 2009, : 766 - 770
  • [26] Multi-Modal Human Action Recognition Using Deep Neural Networks Fusing Image and Inertial Sensor Data
    Hwang, Inhwan
    Cha, Geonho
    Oh, Songhwai
    2017 IEEE INTERNATIONAL CONFERENCE ON MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS (MFI), 2017, : 278 - 283
  • [27] Information measures for object recognition
    Cooper, M
    Miller, M
    ALGORITHMS FOR SYNTHETIC APERTURE RADAR IMAGERY V, 1998, 3370 : 637 - 645
  • [28] MR Object Identification and Interaction: Fusing Object Situation Information from Heterogeneous Sources
    Strecker, Jannis
    Akhunov, Khakim
    Carbone, Federico
    Garcia, Kimberly
    Bektas, Kenan
    Gomez, Andres
    Mayer, Simon
    Yildirim, Kasim Sinan
    PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2023, 7 (03)
  • [29] The VISTA datasets, a combination of inertial sensors and depth cameras data for activity recognition
    Fiorini, Laura
    Cornacchia Loizzo, Federica Gabriella
    Sorrentino, Alessandra
    Rovini, Erika
    Di Nuovo, Alessandro
    Cavallo, Filippo
    SCIENTIFIC DATA, 2022, 9
  • [30] Deformable Structure From Motion by Fusing Visual and Inertial Measurement Data
    Giannarou, Stamatia
    Zhang, Zhiqiang
    Yang, Guang-Zhong
    2012 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2012, : 4816 - 4821