Recognizing Personal Locations From Egocentric Videos

Cited by: 28
Authors
Furnari, Antonino [1 ]
Farinella, Giovanni Maria [1 ]
Battiato, Sebastiano [1 ]
Affiliations
[1] Univ Catania, Dept Math & Comp Sci, I-95124 Catania, Italy
Keywords
Context-aware computing; egocentric dataset; egocentric vision; first person vision; personal location recognition; CONTEXT; CLASSIFICATION; RECOGNITION; SCENE; SHAPE;
DOI
10.1109/THMS.2016.2612002
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Contextual awareness in wearable computing allows for the construction of intelligent systems that can interact with the user in a more natural way. In this paper, we study how personal locations arising from the user's daily activities can be recognized from egocentric videos. We assume that only a few training samples are available for learning purposes. Considering the diversity of devices available on the market, we introduce a benchmark dataset containing egocentric videos of eight personal locations acquired by a user with four different wearable cameras. To make our analysis useful in real-world scenarios, we propose a method to reject negative locations, i.e., those not belonging to any of the categories of interest to the end-user. We assess the performance of the main state-of-the-art representations for scene and object classification on the considered task, as well as the influence of device-specific factors such as the field of view and the wearing modality. Regarding these device-specific factors, experiments revealed that the best results are obtained with a head-mounted wide-angle device. Our analysis shows the effectiveness of representations based on convolutional neural networks, combined with basic transfer learning techniques and an entropy-based rejection algorithm.
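The entropy-based rejection mentioned in the abstract admits a compact illustration. Below is a minimal sketch, assuming the classifier (e.g., a CNN fine-tuned via transfer learning) outputs a softmax posterior over the eight personal locations; the function names and the threshold value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def shannon_entropy(probs: np.ndarray) -> float:
    """Shannon entropy (in nats) of a discrete probability vector."""
    p = probs[probs > 0]  # drop zeros to avoid log(0)
    return float(-(p * np.log(p)).sum())

def recognize_or_reject(probs: np.ndarray, threshold: float):
    """Return the index of the predicted personal location, or None when
    the posterior is too uncertain (treated as a 'negative' location)."""
    if shannon_entropy(probs) > threshold:
        return None  # high entropy: frame likely shows none of the known locations
    return int(np.argmax(probs))

# Toy usage: a confident posterior is accepted, a near-uniform one is rejected.
confident = np.array([0.90, 0.05, 0.03, 0.02])
uniform = np.full(8, 1.0 / 8)  # eight personal locations, as in the dataset
print(recognize_or_reject(confident, threshold=1.0))  # -> 0
print(recognize_or_reject(uniform, threshold=1.0))    # -> None
```

In practice the rejection threshold would be tuned on validation data that includes negative locations, trading off false rejections of known places against false acceptances of unknown ones.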
Pages: 6 - 18
Number of pages: 13
Related Papers
50 items in total
  • [41] Generating Bird's Eye View from Egocentric RGB Videos
    Jain, Vanita
    Wu, Qiming
    Grover, Shivam
    Sidana, Kshitij
    Chaudhary, Gopal
    Myint, San Hlaing
    Hua, Qiaozhi
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2021, 2021
  • [42] PassFrame: Generating Image-based Passwords from Egocentric Videos
    Ngu Nguyen
    Sigg, Stephan
2017 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATIONS WORKSHOPS (PERCOM WORKSHOPS), 2017
  • [43] Tracking Multiple Deformable Objects in Egocentric Videos
    Huang, Mingzhen
    Li, Xiaoxing
    Hu, Jun
    Peng, Honghong
    Lyu, Siwei
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 1461 - 1471
  • [44] Organizing egocentric videos of daily living activities
    Ortis, Alessandro
    Farinella, Giovanni M.
    D'Amico, Valeria
    Addesso, Luca
    Torrisi, Giovanni
    Battiato, Sebastiano
    PATTERN RECOGNITION, 2017, 72 : 207 - 218
  • [45] An Unsupervised Method for Summarizing Egocentric Sport Videos
    Habibi Aghdam, Hamed
    Jahani Heravi, Elnaz
    Puig, Domenec
    EIGHTH INTERNATIONAL CONFERENCE ON MACHINE VISION (ICMV 2015), 2015, 9875
  • [46] Multiscale summarization and action ranking in egocentric videos
    Sahu, Abhimanyu
    Chowdhury, Ananda S.
    PATTERN RECOGNITION LETTERS, 2020, 133 : 256 - 263
  • [47] Anticipating Next Active Objects for Egocentric Videos
    Thakur, Sanket Kumar
    Beyan, Cigdem
    Morerio, Pietro
    Murino, Vittorio
    del Bue, Alessio
    IEEE ACCESS, 2024, 12 : 61767 - 61779
  • [48] Left/right hand segmentation in egocentric videos
    Betancourt, Alejandro
    Morerio, Pietro
    Barakova, Emilia
    Marcenaro, Lucio
    Rauterberg, Matthias
    Regazzoni, Carlo
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2017, 154 : 73 - 81
  • [49] EgoTaskQA: Understanding Human Tasks in Egocentric Videos
    Jia, Baoxiong
    Lei, Ting
    Zhu, Song-Chun
    Huang, Siyuan
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [50] Demo of PassFrame: Generating Image-based Passwords from Egocentric Videos
    Ngu Nguyen
    Sigg, Stephan
2017 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATIONS WORKSHOPS (PERCOM WORKSHOPS), 2017