CROSS-DOMAIN SAR SHIP DETECTION IN STRONG INTERFERENCE ENVIRONMENT BASED ON IMAGE-TO-IMAGE TRANSLATION

Cited by: 1
Authors
Pu, Xinyang [1 ]
Jia, Hecheng [1 ]
Xu, Feng [1 ]
Affiliations
[1] Fudan Univ, Key Lab Informat Sci Elect Waves, MoE, Shanghai 200433, Peoples R China
Keywords
Object detection; Unsupervised domain adaptation; Image-to-image translation; Generative Adversarial Networks; Synthetic Aperture Radar;
DOI
10.1109/IGARSS52108.2023.10282746
CLC number
P [Astronomy, Earth Sciences];
Discipline code
07 ;
Abstract
The performance of an object detection model may deteriorate dramatically when it encounters a new dataset whose data distribution differs from that of the training images. This is especially true for Synthetic Aperture Radar (SAR) images, where the complicated imaging mechanism and diverse acquisition environments can induce intense changes in image appearance and hurt the detection capability and robustness of deep-learning-based models. In this paper, a method for learning the strong interference characteristics of SAR images is proposed and used to generate artificial SAR images as extra training samples for the downstream object detection task, improving detection accuracy and decreasing the miss rate. Our approach, applied as a data augmentation strategy with no annotation cost, is confirmed to be efficacious and reliable by multiple experiments.
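The augmentation idea in the abstract can be sketched as follows: a trained image-to-image translation generator maps clean SAR chips into the strong-interference domain, and each translated chip reuses the original bounding boxes, so the extra samples cost no annotation. The sketch below is a minimal illustration under stated assumptions: `interference_generator` is a hypothetical placeholder (simple speckle and stripe interference), not the paper's GAN, and all names are chosen for this example.

```python
import numpy as np

def interference_generator(chip: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Placeholder for a trained image-to-image translation generator.

    Here we merely overlay multiplicative speckle and additive stripe
    interference; in the paper's setting this role would be played by a
    GAN generator trained to map clean SAR chips into the
    strong-interference domain.
    """
    speckle = rng.gamma(shape=4.0, scale=0.25, size=chip.shape)
    stripes = 0.2 * np.sin(np.linspace(0.0, 8.0 * np.pi, chip.shape[1]))[None, :]
    return np.clip(chip * speckle + stripes, 0.0, 1.0)

def augment_dataset(chips, boxes, rng):
    """Double the training set: each chip gains a translated twin.

    The bounding boxes are reused unchanged, so no new annotation is
    needed -- the translation alters appearance, not ship geometry.
    """
    aug_chips = [interference_generator(c, rng) for c in chips]
    return chips + aug_chips, boxes + boxes

rng = np.random.default_rng(0)
chips = [rng.random((64, 64)) for _ in range(3)]          # toy SAR chips in [0, 1]
boxes = [[(10, 10, 20, 20)], [(5, 5, 15, 15)], [(30, 30, 40, 40)]]
all_chips, all_boxes = augment_dataset(chips, boxes, rng)
print(len(all_chips), len(all_boxes))  # 6 6
```

In the actual method the generator would be trained on unpaired interference-domain imagery; only the box-reuse step (the annotation-free part) carries over from this sketch.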
Pages: 1798 - 1801
Number of pages: 4
Related papers
50 records
  • [1] Image-to-image translation for cross-domain disentanglement
    Gonzalez-Garcia, Abel
    van de Weijer, Joost
    Bengio, Yoshua
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [2] Cross-Domain Interpolation for Unpaired Image-to-Image Translation
    Lopez, Jorge
    Mauricio, Antoni
    Diaz, Jose
    Camara, Guillermo
    COMPUTER VISION SYSTEMS (ICVS 2019), 2019, 11754 : 542 - 551
  • [3] Cross-Domain Infrared Image Classification via Image-to-Image Translation and Deep Domain Generalization
    Guo, Zhao-Rui
    Niu, Jia-Wei
    Liu, Zhun-Ga
    2022 17TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION (ICARCV), 2022, : 487 - 493
  • [4] Cross-Domain Car Detection Using Unsupervised Image-to-Image Translation: From Day to Night
    Arruda, Vinicius F.
    Paixao, Thiago M.
    Berriel, Rodrigo F.
    De Souza, Alberto F.
    Badue, Claudine
    Sebe, Nicu
    Oliveira-Santos, Thiago
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019
  • [5] Image-To-Image Translation Using a Cross-Domain Auto-Encoder and Decoder
    Yoo, Jaechang
    Eom, Heesong
    Choi, Yong Suk
    APPLIED SCIENCES-BASEL, 2019, 9 (22):
  • [6] Learning Unsupervised Cross-domain Image-to-Image Translation using a Shared Discriminator
    Kumar, Rajiv
    Dabral, Rishabh
    Sivakumar, G.
    VISAPP: PROCEEDINGS OF THE 16TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS - VOL. 5: VISAPP, 2021, : 256 - 264
  • [7] Diffusion Models for Cross-Domain Image-to-Image Translation with Paired and Partially Paired Datasets
    Bell, Trisk
    Li, Dan
    2024 IEEE 11TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS, DSAA 2024, 2024, : 38 - 45
  • [8] CACOLIT: Cross-domain Adaptive Co-learning for Imbalanced Image-to-Image Translation
    Wang, Yijun
    Liang, Tao
    Lin, Jianxin
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 1068 - 1076
  • [9] CDTD: A Large-Scale Cross-Domain Benchmark for Instance-Level Image-to-Image Translation and Domain Adaptive Object Detection
    Shen, Zhiqiang
    Huang, Mingyang
    Shi, Jianping
    Liu, Zechun
    Maheshwari, Harsh
    Zheng, Yutong
    Xue, Xiangyang
    Savvides, Marios
    Huang, Thomas S.
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2021, 129 (03) : 761 - 780