Pixel and feature level based domain adaptation for object detection in autonomous driving

Cited by: 62
Authors
Shan, Yuhu [1 ]
Lu, Wen Feng [1 ]
Chew, Chee Meng [1 ]
Affiliations
[1] Natl Univ Singapore, Dept Mech Engn, Singapore 117575, Singapore
Funding
National Research Foundation of Singapore;
Keywords
Autonomous driving; Convolutional neural network; Generative adversarial network; Object detection; Unsupervised domain adaptation;
DOI
10.1016/j.neucom.2019.08.022
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Annotating large-scale datasets to train modern convolutional neural networks is prohibitively expensive and time-consuming for many real-world tasks. One alternative is to train the model on labeled synthetic datasets and apply it to real scenes. However, this straightforward approach often generalizes poorly, mainly because of the domain bias between the synthetic and real datasets. Many unsupervised domain adaptation (UDA) methods have been introduced to address this problem, but most focus only on the simpler classification task. This paper presents a novel UDA model that integrates both image-level and feature-level adaptation to solve the cross-domain object detection problem. We employ generative adversarial network objectives and a cycle-consistency loss for image translation. Furthermore, region-proposal-based feature adversarial training and classification are proposed to further reduce the domain shift and preserve the semantics of the target objects. Extensive experiments are conducted on several different adaptation scenarios, and the results demonstrate the robustness and superiority of the proposed method. (C) 2019 Elsevier B.V. All rights reserved.
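The abstract's pixel-level component follows CycleGAN-style training: an adversarial objective pushes translated images toward the target-domain distribution, while a cycle-consistency loss forces a source-to-target-to-source round trip to reconstruct the input. A minimal NumPy sketch of those two losses is shown below; the toy generators `G` and `F` and the least-squares form of the adversarial loss are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def l1_cycle_loss(x, G, F):
    """Mean L1 reconstruction error after a source -> target -> source cycle."""
    return np.mean(np.abs(F(G(x)) - x))

def lsgan_generator_loss(d_fake):
    """Least-squares GAN generator loss: push D's scores on fakes toward 1."""
    return np.mean((d_fake - 1.0) ** 2)

# Toy 'generators': an invertible scaling pair, so the cycle loss is ~0.
G = lambda x: 2.0 * x   # stand-in for source -> target translation
F = lambda x: 0.5 * x   # stand-in for target -> source translation

x = np.ones((4, 3, 8, 8))  # a batch of 4 toy 3x8x8 'images'
print(l1_cycle_loss(x, G, F))            # 0.0 (perfect reconstruction)

# A discriminator scoring all fakes at 0.5 gives generator loss (0.5-1)^2 = 0.25.
d_fake = np.full((4, 1), 0.5)
print(lsgan_generator_loss(d_fake))      # 0.25
```

In full training, the total generator objective would be a weighted sum of the adversarial and cycle terms; the feature-level adversarial loss described in the abstract would be applied analogously on region-proposal features rather than on pixels.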
Pages: 31-38
Page count: 8