Feature-representation-transfer based road extraction method for cross-domain aerial images

Cited: 0
Authors
Wang S. [1 ]
Mu X. [1 ]
He H. [2 ]
Yang D. [2 ]
Ma C. [1 ]
Affiliations
[1] The Rocket Force University of Engineering, College of Operational Support, Xi'an
[2] The Rocket Force University of Engineering, College of Missile Engineering, Xi'an
Source
Cehui Xuebao/Acta Geodaetica et Cartographica Sinica | 2020, Vol. 49, No. 05
Keywords
Deep learning; Encoder-decoder network; Generative adversarial network; Remote sensing; Road extraction; Transfer learning
DOI
10.11947/j.AGCS.2020.20190274
Abstract
Aiming at the insufficient generalization ability of traditional road extraction methods when applied to a new dataset, this paper proposes a cross-domain road extraction method realized by feature-representation transfer and an encoder-decoder network. First, a basic road extraction model based on an encoder-decoder network is designed to segment roads from a single data source. Then, building on the structure of the road extraction network and the cycle-consistency principle, a cycle generative adversarial network is used for feature transfer of cross-domain imagery, mapping the features of target-city images into the domain of the source data. Finally, the pre-trained road extraction model segments the target-domain images after feature transfer, realizing cross-domain road extraction. Experimental results show that the proposed method improves the generalization ability of the road extraction network and can extract road targets from cross-domain images accurately and effectively. Compared with the results without feature transfer, the proposed method greatly improves the road extraction metrics, increasing the F1-score by more than 50%. The method requires neither annotation of the target-domain images nor fine-tuning of the road extraction network; it only needs to train the feature transfer model from the target domain to the source domain. It therefore has good application value. © 2020, Surveying and Mapping Press. All rights reserved.
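The abstract reports an F1-score gain of more than 50% after feature transfer. As a reminder of what that evaluation metric measures for road extraction, here is a minimal pure-Python sketch of pixel-wise F1 between a predicted binary road mask and the ground truth; the function name, mask encoding (1 = road, 0 = background), and toy masks are illustrative assumptions, not taken from the paper:

```python
def f1_score(pred, truth):
    """Pixel-wise F1 between two flat binary masks (1 = road, 0 = background).

    Illustrative sketch only; masks are assumed to be equal-length
    sequences of 0/1 values, e.g. flattened segmentation outputs.
    """
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)  # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)  # false positives
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)  # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy 8-pixel masks (hypothetical values)
pred = [1, 1, 0, 0, 1, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1, 1, 0]
print(f1_score(pred, truth))  # → 0.75
```

Because F1 balances precision and recall on the (typically sparse) road class, it is a stricter indicator of cross-domain segmentation quality than overall pixel accuracy, which is dominated by background pixels.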
Pages: 611-621
Page count: 10