Perspective on explainable SAR target recognition

Cited: 0
|
Authors
Guo W. [1 ]
Zhang Z. [2 ]
Yu W. [2 ]
Sun X. [1 ]
Affiliations
[1] Center of Digital Innovation, Tongji University, Shanghai
[2] Shanghai Key Lab of Intelligent Sensing and Recognition, Shanghai Jiaotong University, Shanghai
Funding
National Natural Science Foundation of China
Keywords
Automatic Target Recognition (ATR); Deep learning; Explainability and interpretability; Explainable machine learning; SAR;
DOI
10.12000/JR20059
CLC number
Subject classification code
Abstract
SAR Automatic Target Recognition (ATR) is a key task in microwave remote sensing. Recently, Deep Neural Networks (DNNs) have shown promising results in SAR ATR. However, despite their success, the underlying reasoning and decision mechanisms of DNNs operate essentially as a black box and remain unknown to users. This lack of transparency and explainability in SAR ATR poses severe security risks and reduces users' trust in, and the verifiability of, the decision-making process. To address these challenges, we argue in this paper that research on the explainability and interpretability of SAR ATR is necessary to enable the development of interpretable SAR ATR models and algorithms, and thereby improve the validity and transparency of AI-based SAR ATR systems. First, we present recent developments in SAR ATR, note current practical challenges, and make a plea for research to improve the explainability and interpretability of SAR ATR. Second, we review and summarize recent research on, and practical applications of, explainable machine learning and deep learning. We then discuss aspects of explainable SAR ATR concerning model understanding, model diagnosis, and model improvement, toward a better understanding of the internal representations and decision mechanisms. Moreover, we emphasize the need for interpretable SAR feature learning and recognition models that integrate SAR physical characteristics and domain knowledge. Finally, we draw conclusions and suggest future work for SAR ATR that combines data-driven and knowledge-driven methods, human-computer cooperation, and interactive deep learning. © 2020 Institute of Electronics Chinese Academy of Sciences. All rights reserved.
Pages: 462-476
Page count: 14
References
80 items in total
  • [1] JIN Yaqiu, Multimode remote sensing intelligent information and target recognition: Physical intelligence of microwave vision, Journal of Radars, 8, 6, pp. 710-716, (2019)
  • [2] KEYDEL E R, LEE S W, MOORE J T., MSTAR extended operating conditions: A tutorial, Proceedings of SPIE Volume 2757, Algorithms for Synthetic Aperture Radar Imagery III, (1996)
  • [3] ZHAO Juanping, GUO Weiwei, ZHANG Zenghui, et al., A coupled convolutional neural network for small and densely clustered ship detection in SAR images, Science China Information Sciences, 62, 4, (2019)
  • [4] DU Lan, WANG Zhaocheng, WANG Yan, et al., Survey of research progress on target detection and discrimination of single-channel SAR images for complex scenes, Journal of Radars, 9, 1, pp. 34-54, (2020)
  • [5] XU Feng, WANG Haipeng, JIN Yaqiu, Deep learning as applied in SAR target recognition and terrain classification, Journal of Radars, 6, 2, pp. 136-148, (2017)
  • [6] SELVARAJU R R, COGSWELL M, DAS A, et al., Grad-CAM: Visual explanations from deep networks via gradient-based localization, International Journal of Computer Vision, 128, 2, pp. 336-359, (2020)
  • [7] GOODFELLOW I J, SHLENS J, SZEGEDY C., Explaining and harnessing adversarial examples, 2015 International Conference on Learning Representations, (2015)
  • [8] JI Shouling, LI Jinfeng, DU Tianyu, et al., Survey on techniques, applications and security of machine learning interpretability, Journal of Computer Research and Development, 56, 10, pp. 2071-2096, (2019)
  • [9] WU Fei, LIAO Binbing, HAN Yahong, Interpretability for deep learning, Aero Weaponry, 26, 1, pp. 39-46, (2019)
  • [10] GUIDOTTI R, MONREALE A, RUGGIERI S, et al., A survey of methods for explaining black box models, ACM Computing Surveys, 51, 5, (2018)