End2end vehicle multitask perception in adverse weather

Cited: 0
Authors
Dai, Yifan [1 ]
Wang, Qiang [1 ]
Affiliations
[1] China FAW Group Co., Ltd., Changchun, People's Republic of China
Keywords
Multitask perception; Supervised learning; Unsupervised domain adaptation; Object detection; Lane detection; Drivable area detection; Tracking
DOI
10.1016/j.robot.2025.104945
Chinese Library Classification (CLC)
TP [Automation and computer technology]
Subject classification code
0812
Abstract
In research on autonomous driving, perception in adverse weather remains a challenge because datasets covering the various extreme weather conditions are scarce. To address this problem, this paper introduces an end-to-end multi-task perception system that combines labeled supervised learning with unsupervised domain-adaptive learning for adverse weather. The key innovations of the system are: (1) a multitask learning framework that simultaneously handles object detection, lane line detection, and drivable area detection, improving both efficiency and cost-effectiveness for autonomous driving in complex environments; (2) a domain adaptation strategy that uses unlabeled adverse-weather data, enabling the system to perform robustly without requiring labels for harsh weather conditions; and (3) strong generalization ability, demonstrated by an object detection mAP of 83.86%, a drivable area mIoU of 91.59%, and a lane detection accuracy of 83.9% on the BDD100K dataset, as well as an mAP of 74.85% on the Cityscapes fog dataset without additional training, highlighting its effectiveness in unseen, adverse conditions. The scalable and generalizable solution presented in this paper can support high-performance navigation in various extreme environments. By combining supervised and unsupervised learning techniques, the model not only copes with severe weather but also generalizes to unseen scenarios.
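The sketch below illustrates, in PyTorch, the kind of architecture the abstract describes: a shared backbone feeding three task heads (object detection, lane detection, drivable area detection) trained with supervised losses on labeled clear-weather data, plus an unsupervised domain-adaptation branch trained on unlabeled adverse-weather images. The gradient-reversal domain classifier, the module sizes, the per-pixel placeholder losses, and the equal loss weighting are illustrative assumptions, not the paper's actual implementation.

# Minimal PyTorch sketch of a shared-backbone multitask perception model with an
# adversarial domain-adaptation branch. All module and loss choices are assumptions
# made for illustration; they do not reproduce the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class MultiTaskPerception(nn.Module):
    def __init__(self, num_classes=10, lambd=0.1):
        super().__init__()
        self.lambd = lambd
        # Shared convolutional backbone (stand-in for the paper's encoder).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Task heads: a per-pixel class map standing in for the detection head,
        # plus drivable-area and lane segmentation heads.
        self.det_head = nn.Conv2d(64, num_classes, 1)
        self.area_head = nn.Conv2d(64, 2, 1)
        self.lane_head = nn.Conv2d(64, 2, 1)
        # Domain classifier: clear-weather (0) vs adverse-weather (1) images.
        self.domain_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2)
        )

    def forward(self, x):
        feats = self.backbone(x)
        return {
            "det": self.det_head(feats),
            "area": self.area_head(feats),
            "lane": self.lane_head(feats),
            # Gradient reversal pushes the backbone toward weather-invariant features.
            "domain": self.domain_head(GradientReversal.apply(feats, self.lambd)),
        }


def training_step(model, labeled_batch, unlabeled_batch):
    """One combined step: supervised task losses on labeled clear-weather data plus an
    unsupervised adversarial domain loss that also uses unlabeled adverse-weather images.
    Targets are assumed to be long tensors at the feature-map resolution in this sketch."""
    img, det_t, area_t, lane_t = labeled_batch   # labeled source-domain data
    adverse_img = unlabeled_batch                # unlabeled target-domain data

    src = model(img)
    tgt = model(adverse_img)

    task_loss = (
        F.cross_entropy(src["det"], det_t)
        + F.cross_entropy(src["area"], area_t)
        + F.cross_entropy(src["lane"], lane_t)
    )
    # Domain labels: 0 for labeled clear-weather images, 1 for unlabeled adverse-weather images.
    dom_logits = torch.cat([src["domain"], tgt["domain"]], dim=0)
    dom_labels = torch.cat([
        torch.zeros(img.size(0), dtype=torch.long, device=dom_logits.device),
        torch.ones(adverse_img.size(0), dtype=torch.long, device=dom_logits.device),
    ])
    domain_loss = F.cross_entropy(dom_logits, dom_labels)
    return task_loss + domain_loss

Gradient reversal (DANN-style adaptation) is one common way to exploit unlabeled target-domain images for unsupervised domain adaptation; the paper's concrete adaptation mechanism may differ.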
Pages: 11
Related papers (50 records)
  • [1] END2END ACOUSTIC TO SEMANTIC TRANSDUCTION
    Pelloin, Valentin
    Camelin, Nathalie
    Laurent, Antoine
    De Mori, Renato
    Caubriere, Antoine
    Esteve, Yannick
    Meignier, Sylvain
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7448 - 7452
  • [2] Deep End2End Voxel2Voxel Prediction
    Tran, Du
    Bourdev, Lubomir
    Fergus, Rob
    Torresani, Lorenzo
    Paluri, Manohar
    PROCEEDINGS OF 29TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, (CVPRW 2016), 2016, : 402 - 409
  • [3] OneTrack-An End2End approach to enhance MOT with Transformers
    Araujo, Luiz
    Figueiredo, Carlos
    Journal of Internet Services and Applications, 2024, 15 (01) : 302 - 312
  • [4] End2End Occluded Face Recognition by Masking Corrupted Features
    Qiu, Haibo
    Gong, Dihong
    Li, Zhifeng
    Liu, Wei
    Tao, Dacheng
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (10) : 6939 - 6952
  • [5] Flexible End2End Workflow Automation of Hit-Discovery Research
    Holzmueller-Laue, Silke
    Goede, Bernd
    Thurow, Kerstin
    JALA, 2014, 19 (04): : 349 - 361
  • [6] End2End Semantic Segmentation for 3D Indoor Scenes
    Zhao, Na
    PROCEEDINGS OF THE 2018 ACM MULTIMEDIA CONFERENCE (MM'18), 2018, : 810 - 814
  • [7] TransModality: An End2End Fusion Method with Transformer for Multimodal Sentiment Analysis
    Wang, Zilong
    Wan, Zhaohong
    Wan, Xiaojun
    WEB CONFERENCE 2020: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2020), 2020, : 2514 - 2520
  • [8] End2End Multi-View Feature Matching with Differentiable Pose Optimization
    Roessle, Barbara
    Niessner, Matthias
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 477 - 487
  • [9] DeFraudNet:End2End Fingerprint Spoof Detection using Patch Level Attention
    Anusha, B. V. S.
    Banerjee, Sayan
    Chaudhuri, Subhasis
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 2684 - 2693
  • [10] Wav2vec behind the Scenes: How end2end Models learn Phonetics
    Dieck, Teena Tom
    Perez-Toro, Paula Andrea
    Arias, Tomas
    Noeth, Elmar
    Klumpp, Philipp
    INTERSPEECH 2022, 2022, : 5130 - 5134