Multi-Frequency RF Sensor Data Adaptation for Motion Recognition with Multi-Modal Deep Learning

Cited by: 8
Authors
Rahman, M. Mahbubur [1 ]
Gurbuz, Sevgi Z. [1 ]
Affiliations
[1] Univ Alabama, Dept Elect & Comp Engn, Tuscaloosa, AL 35487 USA
Funding
U.S. National Science Foundation
Keywords
micro-Doppler; radar; multi-modal learning; adversarial neural networks; classification
DOI
10.1109/RadarConf2147009.2021.9455204
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
The widespread availability of low-cost RF sensors has made it easier to construct RF sensor networks for motion recognition, while also increasing the availability of RF data across a variety of frequencies, waveforms, and transmit parameters. However, directly using such disparate RF sensor data to train deep neural networks is not effective, as the phenomenological differences in the data result in significant performance degradation. In this paper, we consider two approaches for the exploitation of multi-frequency RF data: 1) a single-sensor case, where adversarial domain adaptation is used to transform the data from one RF sensor to resemble that of another, and 2) a multi-sensor case, where a multi-modal neural network is designed for joint target recognition using measurements from all sensors. Our results show that the developed approaches offer effective techniques for leveraging multi-frequency RF sensor data for target recognition.
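For the multi-sensor case described in the abstract, feature-level fusion is a common way to combine measurements from RF sensors operating at different frequencies: each sensor's micro-Doppler spectrogram is encoded by its own convolutional branch, and the concatenated embeddings drive a shared classifier. The Python/PyTorch sketch below is illustrative only; the branch depths, input size, class count, and the 24 GHz / 77 GHz pairing are assumptions, not details taken from this paper, and the single-sensor adversarial adaptation approach is not shown.

```python
# Minimal sketch of feature-level fusion for two RF sensors (illustrative assumptions only).
import torch
import torch.nn as nn

def conv_branch(out_dim: int = 128) -> nn.Sequential:
    """Small CNN encoder for one sensor's micro-Doppler spectrogram (1 x 128 x 128 assumed)."""
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1),   # -> (N, 32, 1, 1)
        nn.Flatten(),              # -> (N, 32)
        nn.Linear(32, out_dim), nn.ReLU(),
    )

class MultiFrequencyFusionNet(nn.Module):
    """Joint motion classifier over two RF sensors via concatenation of per-sensor features."""
    def __init__(self, num_classes: int = 6, feat_dim: int = 128):
        super().__init__()
        self.enc_low = conv_branch(feat_dim)    # e.g., lower-frequency sensor (assumption: 24 GHz)
        self.enc_high = conv_branch(feat_dim)   # e.g., higher-frequency sensor (assumption: 77 GHz)
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x_low: torch.Tensor, x_high: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.enc_low(x_low), self.enc_high(x_high)], dim=1)
        return self.classifier(z)

if __name__ == "__main__":
    model = MultiFrequencyFusionNet()
    x_low = torch.randn(4, 1, 128, 128)    # toy batch of lower-frequency spectrograms
    x_high = torch.randn(4, 1, 128, 128)   # toy batch of higher-frequency spectrograms
    print(model(x_low, x_high).shape)      # torch.Size([4, 6])
```

Concatenation is the simplest fusion choice; the same two-branch structure also admits weighted or attention-based combination of the per-sensor embeddings.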
Pages: 6
Related Papers
50 records in total
  • [31] Electromagnetic signal feature fusion and recognition based on multi-modal deep learning
    Hou C.
    Zhang X.
    Chen X.
    International Journal of Performability Engineering, 2020, 16 (06): 941 - 949
  • [32] Human Behavior Recognition Algorithm Based on Multi-Modal Sensor Data Fusion
    Zheng, Dingchao
    Chen, Caiwei
    Yu, Jianzhe
    JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2025, 29 (02) : 287 - 305
  • [33] Multi-Modal Deep Learning-Based Violin Bowing Action Recognition
    Liu, Bao-Yun
    Jen, Yi-Hsin
    Sun, Shih-Wei
    Su, Li
    Chang, Pao-Chi
    2020 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS - TAIWAN (ICCE-TAIWAN), 2020,
  • [34] Robotic grasping recognition using multi-modal deep extreme learning machine
    Wei, Jie
    Liu, Huaping
    Yan, Gaowei
    Sun, Fuchun
    MULTIDIMENSIONAL SYSTEMS AND SIGNAL PROCESSING, 2017, 28 (03) : 817 - 833
  • [35] Robotic grasping recognition using multi-modal deep extreme learning machine
    Jie Wei
    Huaping Liu
    Gaowei Yan
    Fuchun Sun
    Multidimensional Systems and Signal Processing, 2017, 28 : 817 - 833
  • [36] Deep Learning Based Multi-modal Addressee Recognition in Visual Scenes with Utterances
    Thao Le Minh
    Shimizu, Nobuyuki
    Miyazaki, Takashi
    Shinoda, Koichi
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 1546 - 1553
  • [37] Multi-Modal Emotion Recognition Based on Deep Learning of EEG and Audio Signals
    Li, Zhongjie
    Zhang, Gaoyan
    Dang, Jianwu
    Wang, Longbiao
    Wei, Jianguo
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [38] Multi-Modal ISAR Object Recognition using Adaptive Deep Relation Learning
    Xue, Bin
    Tong, Ningning
    2019 INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS, SIGNAL PROCESSING AND NETWORKING (WISPNET 2019): ADVANCING WIRELESS AND MOBILE COMMUNICATIONS TECHNOLOGIES FOR 2020 INFORMATION SOCIETY, 2019, : 48 - 53
  • [39] ASL Recognition Based on Kinematics Derived from a Multi-Frequency RF Sensor Network
    Gurbuz, Sevgi Z.
    Gurbuz, Ali C.
    Malaia, Evie A.
    Griffin, Darrin J.
    Crawford, Chris
    Kurtoglu, Emre
    Rahman, M. Mahbubur
    Aksu, Ridvan
    Mdrafi, Robiulhossain
    2020 IEEE SENSORS, 2020,
  • [40] Multi-Frequency RF Sensor Fusion for Word-Level Fluent ASL Recognition
    Gurbuz, Sevgi Z.
    Rahman, M. Mahbubur
    Kurtoglu, Emre
    Malaia, Evie
    Gurbuz, Ali Cafer
    Griffin, Darrin J.
    Crawford, Chris
    IEEE SENSORS JOURNAL, 2022, 22 (12) : 11373 - 11381