Causal Robust Trajectory Prediction Against Adversarial Attacks for Autonomous Vehicles

Cited by: 0
Authors
Duan, Ang [1 ,2 ]
Wang, Ruyan [3 ,4 ,5 ]
Cui, Yaping [3 ,4 ,5 ]
He, Peng [3 ,4 ,5 ]
Chen, Luo [3 ,4 ,5 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Sch Commun & Informat Engn, Chongqing 400065, Peoples R China
[2] Chongqing Univ Educ, Sch Artificial Intelligence, Chongqing 400065, Peoples R China
[3] Chongqing Univ Posts & Telecommun, Sch Commun & Informat Engn, Adv Network & Intelligent Connect Technol Key Lab, Chongqing Educ Commiss China, Chongqing 400065, Peoples R China
[4] Chongqing Univ Posts & Telecommun, Lab Chongqing Educ Commiss China, Chongqing 400065, Peoples R China
[5] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Ubiquitous Sensing & Networking, Chongqing 400065, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024 / Vol. 11 / No. 22
Funding
National Natural Science Foundation of China;
Keywords
Trajectory; Predictive models; Training; History; Perturbation methods; Autonomous vehicles; Measurement; Adversarial attack; adversarial robustness; causal inference; vehicle trajectory prediction;
DOI
10.1109/JIOT.2023.3342788
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Number
0812;
Abstract
Autonomous vehicles may mistakenly predict the future trajectories of neighboring vehicles when the trajectory prediction model is under attack. Recent works utilize adversarial training to mitigate the prediction errors of the trajectory prediction model under attacks. However, adversarial training exhibits high training costs and poor generality across different attack methods. Meanwhile, adversarial training improves the trajectory prediction performance under attacks by learning from adversarial examples, which leads to greater performance degradation in normal (attack-free) cases. In this article, to ensure the driving safety of autonomous vehicles, we propose a causal robust trajectory prediction method named CausalRobTra, which employs total direct effect (TDE) inference to defend trajectory predictors against adversarial attacks from the perspective of causal inference theory. First, we propose four directional metrics to evaluate the prediction errors of the trajectory prediction model under attacks. Then, we construct the causal graph of trajectory prediction under attacks and analyze the causalities among the nodes. Next, we conduct a counterfactual intervention on the history trajectory by replacing it with a counterfactual trajectory, cutting off the link between the history trajectory and the adversarial perturbation. Finally, we calculate the TDE by subtracting the counterfactual prediction from the factual prediction to eliminate the impact of the adversarial perturbation on the final prediction. Compared with the no-defense case, our method improves performance by 13.4% under attacks, at the cost of a 7.7% performance degradation on clean data. In addition, our method improves performance by 20.6% on clean data compared with adversarial training while achieving similar performance to adversarial training under attacks. Such an improvement can help ensure the safety of autonomous vehicles under attacks and avoid traffic accidents.
Our CausalRobTra is a plug-and-play defense method that can be easily applied to any other trajectory prediction model. Extensive experiments demonstrate that our method effectively improves the adversarial robustness of the trajectory prediction model under attacks at the expense of lower performance degradation in normal (without attacks) cases.
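The TDE step described in the abstract — predict on the factual (possibly attacked) history, predict again on a counterfactual replacement history, and subtract — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the predictor `toy_predict`, the counterfactual construction, and all array shapes are assumptions introduced here for illustration only.

```python
import numpy as np

def tde_prediction(predict, history, counterfactual_history):
    """TDE-style debiasing sketch (illustrative only).

    predict                 -- maps a history trajectory (T, 2) to a
                               predicted future trajectory (T', 2)
    history                 -- observed (possibly attacked) history
    counterfactual_history  -- history with the counterfactual
                               intervention applied
    """
    factual = predict(history)                        # prediction on factual input
    counterfactual = predict(counterfactual_history)  # prediction after intervention
    # Subtract the counterfactual prediction from the factual one to
    # remove the effect carried through the perturbed history.
    return factual - counterfactual

def toy_predict(hist):
    """Hypothetical stand-in predictor: constant-velocity extrapolation
    that repeats the last observed displacement for 3 future steps."""
    step = hist[-1] - hist[-2]
    return hist[-1] + np.arange(1, 4)[:, None] * step

history = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
# A trivial counterfactual: a stationary replacement history.
counterfactual_history = np.zeros_like(history)
tde = tde_prediction(toy_predict, history, counterfactual_history)
print(tde)  # future positions with the counterfactual effect removed
```

Under this toy counterfactual the stationary history predicts no motion, so the TDE equals the factual prediction; with a perturbed history, the subtraction cancels the component of the output attributable to the intervention target.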
Pages: 35762-35776
Page count: 15
Related Papers
50 total
  • [21] Robust Deep Object Tracking against Adversarial Attacks
    Jia, Shuai
    Ma, Chao
    Song, Yibing
    Yang, Xiaokang
    Yang, Ming-Hsuan
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025, 133 (03) : 1238 - 1257
  • [22] Robust Graph Convolutional Networks Against Adversarial Attacks
    Zhu, Dingyuan
    Zhang, Ziwei
    Cui, Peng
    Zhu, Wenwu
KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2019, : 1399 - 1407
  • [23] Robust Meta Network Embedding against Adversarial Attacks
    Zhou, Yang
    Ren, Jiaxiang
    Dou, Dejing
    Jin, Ruoming
    Zheng, Jingyi
    Lee, Kisung
    20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2020), 2020, : 1448 - 1453
  • [24] Targeted Adversarial Attacks against Neural Network Trajectory Predictors
    Tan, Kaiyuan
    Wang, Jun
    Kantaros, Yannis
    LEARNING FOR DYNAMICS AND CONTROL CONFERENCE, VOL 211, 2023, 211
  • [25] WAKE: Towards Robust and Physically Feasible Trajectory Prediction for Autonomous Vehicles With WAvelet and KinEmatics Synergy
    Wang, Chengyue
    Liao, Haicheng
    Li, Zhenning
    Xu, Chengzhong
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47 (04) : 3126 - 3140
  • [26] Is Semantic Communication for Autonomous Driving Secured against Adversarial Attacks?
    Ribouh, Soheyb
    Hadid, Abdenour
    2024 IEEE 6TH INTERNATIONAL CONFERENCE ON AI CIRCUITS AND SYSTEMS, AICAS 2024, 2024, : 139 - 143
  • [27] Trajectory Prediction with Correction Mechanism for Connected and Autonomous Vehicles
    Lv, Pin
    Liu, Hongbiao
    Xu, Jia
    Li, Taoshen
    ELECTRONICS, 2022, 11 (14)
  • [28] On the Robustness of Intrusion Detection Systems for Vehicles Against Adversarial Attacks
    Choi, Jeongseok
    Kim, Hyoungshick
    INFORMATION SECURITY APPLICATIONS, 2021, 13009 : 39 - 50
  • [29] Covert Attacks Through Adversarial Learning: Study of Lane Keeping Attacks on the Safety of Autonomous Vehicles
    Farivar, Faezeh
    Haghighi, Mohammad Sayad
    Jolfaei, Alireza
    Wen, Sheng
    IEEE-ASME TRANSACTIONS ON MECHATRONICS, 2021, 26 (03) : 1350 - 1357
  • [30] Robust Trajectory Tracking Control for Underactuated Autonomous Underwater Vehicles
    Heshmati-Alamdari, Shahab
    Nikou, Alexandros
    Dimarogonas, Dimos V.
    2019 IEEE 58TH CONFERENCE ON DECISION AND CONTROL (CDC), 2019, : 8311 - 8316