Context-Aware Faster RCNN for CSI-Based Human Action Perception

Times Cited: 1
Authors
Sheng, Biyun
Xiao, Fu [1 ]
Gui, Linqing
Guo, Zhengxin
Affiliations
[1] Nanjing Univ Posts & Telecommun, Sch Comp, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China; U.S. National Science Foundation;
Keywords
Action perception; channel state information (CSI); context information; device-free sensing; faster RCNN;
DOI
10.1109/THMS.2022.3225828
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the widespread deployment of commercial wireless devices, researchers have begun to focus on device-free sensing tasks. In the field of action perception, existing WiFi-based sensing works mostly follow a framework in which action instances in channel state information (CSI) are first extracted and then classified. For the human action detection stage, most works adopt threshold-based sliding-window or frame-by-frame detection methods. However, the former makes it hard to set a threshold that is reasonable for all samples, while the latter requires substantial labor to label every moment of the time sequences. To overcome these problems, we design an end-to-end context-aware faster region-based convolutional neural network (RCNN) framework named Wisense that simultaneously detects temporal boundaries and classifies actions. More specifically, Wisense consists of a backbone net, a region proposal net (RPN), a pooling layer, and a prediction net, which directly regresses the action location along the time axis and classifies the action type. For temporal detection of wireless signals, we transform the input into a 1-D feature map and extract multiscale 1-D anchors. In addition, to sufficiently mine context information, we extend the boundaries of region proposals and further build temporal pyramid features. Experimental results in three indoor scenes validate the effectiveness of the proposed Wisense.
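The following is a minimal sketch, not the authors' implementation, of how such a 1-D Faster-RCNN-style pipeline for CSI temporal action detection could be organized. It assumes PyTorch, a hypothetical 30-subcarrier CSI input, and illustrative layer sizes; the multiscale 1-D anchors and the proposal-boundary extension follow the ideas named in the abstract, while the temporal pyramid levels are omitted for brevity.

```python
# Sketch of a 1-D Faster-RCNN-style CSI action detector (hypothetical sizes, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_1d_anchors(feat_len, stride, scales=(16, 32, 64)):
    """Multiscale 1-D anchors: (start, end) intervals centered on each feature cell."""
    centers = (torch.arange(feat_len, dtype=torch.float32) + 0.5) * stride
    anchors = [torch.stack([centers - s / 2, centers + s / 2], dim=1) for s in scales]
    return torch.cat(anchors, dim=0)                 # (feat_len * len(scales), 2)


class Backbone1D(nn.Module):
    """1-D convolutional backbone over the CSI time axis (channels = subcarriers)."""
    def __init__(self, in_ch=30, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, feat_ch, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(feat_ch, feat_ch, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):                            # x: (batch, subcarriers, time)
        return self.net(x)                           # (batch, feat_ch, time // 4)


class RPN1D(nn.Module):
    """Proposal head: per-anchor action/background score and boundary offsets."""
    def __init__(self, feat_ch=64, num_scales=3):
        super().__init__()
        self.conv = nn.Conv1d(feat_ch, feat_ch, 3, padding=1)
        self.cls = nn.Conv1d(feat_ch, num_scales, 1)       # objectness per anchor scale
        self.reg = nn.Conv1d(feat_ch, num_scales * 2, 1)   # (center, length) offsets

    def forward(self, feat):
        h = F.relu(self.conv(feat))
        return self.cls(h), self.reg(h)


def temporal_roi_pool(feat, proposal, out_len=8, context=0.5):
    """Pool a proposal plus extended context into a fixed-length feature
    (sketch of the boundary-extension idea)."""
    start, end = proposal
    span = end - start
    start = max(int(start - context * span), 0)
    end = min(int(end + context * span), feat.shape[-1])
    region = feat[..., start:max(end, start + 1)]
    return F.adaptive_max_pool1d(region, out_len)


# Toy usage with a hypothetical CSI stream of 512 packets and 30 subcarriers.
csi = torch.randn(1, 30, 512)
feat = Backbone1D()(csi)                             # (1, 64, 128)
scores, offsets = RPN1D()(feat)                      # (1, 3, 128), (1, 6, 128)
anchors = make_1d_anchors(feat.shape[-1], stride=4)  # (384, 2)
pooled = temporal_roi_pool(feat, proposal=(100, 180))
print(scores.shape, offsets.shape, anchors.shape, pooled.shape)
```

A prediction net would then classify each pooled region and refine its temporal boundaries, mirroring the detection-plus-classification flow the abstract describes.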
Pages: 438-448
Number of Pages: 11