Radar Signal Abnormal Point Classification based on Camera-Radar Sensor Fusion

Cited by: 1
Authors
Seo, Hyojeong [1 ]
Han, Dong Seog [2 ]
Affiliations
[1] Kyungpook Natl Univ, Sch Elect & Elect Engn, Daegu, South Korea
[2] Kyungpook Natl Univ, Sch Elect Engn, Daegu, South Korea
Keywords
Radar; RCS; deep learning; classification; sensor fusion;
DOI
10.1109/ICAIIC57133.2023.10067112
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
For safe driving, it is essential to obtain reliable information from perception sensors. In this paper, we present a deep learning model that classifies incoming radar signals as normal or abnormal. An abnormal signal is defined as radar noise or any signal received while the radar has failed or is malfunctioning. It is difficult to determine whether reflected signals are normal based on radar data alone. Therefore, the camera and radar sensors are used together, exploiting the fact that the radar cross section (RCS) distribution varies with the angle and distance of the object. The proposed model uses the data received from the camera and radar sensors to determine whether an object's signal is normal. The model achieves an accuracy of 96.24%. These results show that the reliability of radar signals can be assessed in real driving environments, thereby improving the safety of vehicles and pedestrians.
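The abstract describes a binary classifier that fuses per-detection radar measurements with camera information to label each radar point as normal or abnormal. The record does not include the authors' implementation; the following is a minimal sketch only, assuming each detection is summarized by a few hand-picked radar features (RCS, range, azimuth) and a camera-derived class vector. The feature set, layer sizes, and class count are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (not the authors' implementation) of a camera-radar fusion
# classifier for normal/abnormal radar points. Features and dimensions are
# assumptions made for illustration.
import torch
import torch.nn as nn

NUM_CAMERA_CLASSES = 5   # assumed camera object classes, e.g. car, truck, pedestrian, cyclist, other
RADAR_FEATURES = 3       # assumed radar features per point: RCS [dBsm], range [m], azimuth [rad]

class FusionPointClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Separate small encoders per modality, then a joint classification head.
        self.radar_enc = nn.Sequential(nn.Linear(RADAR_FEATURES, 32), nn.ReLU())
        self.camera_enc = nn.Sequential(nn.Linear(NUM_CAMERA_CLASSES, 16), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(32 + 16, 32), nn.ReLU(),
            nn.Linear(32, 2),        # logits for [normal, abnormal]
        )

    def forward(self, radar_feats, camera_feats):
        # Concatenate the two modality embeddings and classify the fused vector.
        fused = torch.cat([self.radar_enc(radar_feats),
                           self.camera_enc(camera_feats)], dim=-1)
        return self.head(fused)

# Usage sketch: a batch of 4 detections with random stand-in features.
model = FusionPointClassifier()
radar = torch.randn(4, RADAR_FEATURES)
camera = torch.randn(4, NUM_CAMERA_CLASSES)
pred = model(radar, camera).argmax(dim=-1)   # 0 = normal, 1 = abnormal
print(pred)
```

In a real pipeline the camera branch would more likely consume detector outputs or projected image features rather than a bare class vector; the sketch only illustrates the two-stream fusion followed by a two-class head.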
Pages: 590-594
Number of pages: 5