CRS-Diff: Controllable Remote Sensing Image Generation With Diffusion Model

Cited by: 3
Authors
Tang, Datao [1 ,2 ]
Cao, Xiangyong [1 ,2 ]
Hou, Xingsong [3 ]
Jiang, Zhongyuan [4 ]
Liu, Junmin [5 ]
Meng, Deyu [2 ,5 ,6 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Comp Sci & Technol, Xian 710049, Peoples R China
[2] Xi An Jiao Tong Univ, Key Lab Intelligent Networks & Network Secur, Minist Educ, Xian 710049, Peoples R China
[3] Xi An Jiao Tong Univ, Sch Informat & Commun Engn, Xian 710049, Shaanxi, Peoples R China
[4] Xidian Univ, Sch Cyber Engn, Xian 710049, Shaanxi, Peoples R China
[5] Xi An Jiao Tong Univ, Sch Math & Stat, Xian 710049, Shaanxi, Peoples R China
[6] Macau Univ Sci & Technol, Macao Inst Syst Engn, Taipa, Macao, Peoples R China
Keywords
Diffusion models; Image synthesis; Image resolution; Text to image; Remote sensing; Training; Task analysis; Controllable generation; deep learning; diffusion model; remote sensing (RS) image;
DOI
10.1109/TGRS.2024.3453414
Chinese Library Classification
P3 [Geophysics]; P59 [Geochemistry];
Discipline Code
0708; 070902;
Abstract
The emergence of generative models has revolutionized the field of remote sensing (RS) image generation. Although existing methods can generate high-quality images, they rely mainly on text control conditions and therefore do not always generate images accurately and stably. In this article, we propose CRS-Diff, a generative framework specifically tailored for RS image generation that leverages the inherent advantages of diffusion models while integrating more advanced control mechanisms. Specifically, CRS-Diff can simultaneously support text-condition, metadata-condition, and image-condition control inputs, thus enabling more precise control to refine the generation process. To effectively integrate multiple sources of control information, we introduce a new conditional control mechanism that achieves multiscale feature fusion (FF), thus enhancing the guiding effect of the control conditions. To the best of our knowledge, CRS-Diff is the first multiple-condition controllable RS generative model. Experimental results in single-condition and multiple-condition cases demonstrate, both quantitatively and qualitatively, the superior ability of CRS-Diff to generate RS images compared with previous methods. Additionally, CRS-Diff can serve as a data engine that generates high-quality training data for downstream tasks, e.g., road extraction. The code is available at https://github.com/Sonettoo/CRS-Diff.
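The abstract describes fusing text, metadata, and image conditions at multiple scales to steer the diffusion process. The PyTorch sketch below is purely illustrative and is not the authors' released code: the module name (MultiConditionFusion), the embedding widths, and the metadata fields (e.g., normalized latitude/longitude, ground sampling distance, month) are assumptions, and the design only mimics the general ControlNet-style idea of injecting a conditioning branch's multiscale features into a denoising U-Net.

```python
# Hypothetical sketch (not the CRS-Diff implementation): fuse text, metadata,
# and image conditions into multiscale control features for a diffusion U-Net.
# All module names, widths, and metadata fields are illustrative assumptions.
import torch
import torch.nn as nn


class MultiConditionFusion(nn.Module):
    """Fuse text, metadata, and image conditions into multiscale features."""

    def __init__(self, text_dim=768, meta_dim=4, base_channels=64, num_scales=3):
        super().__init__()
        # Project the global (text + metadata) conditions to a shared width.
        self.text_proj = nn.Linear(text_dim, base_channels)
        self.meta_proj = nn.Sequential(
            nn.Linear(meta_dim, base_channels), nn.SiLU(),
            nn.Linear(base_channels, base_channels),
        )
        # Image-condition encoder: strided convs give one feature map per scale.
        self.image_stages = nn.ModuleList()
        in_ch = 3
        for s in range(num_scales):
            out_ch = base_channels * (2 ** s)
            self.image_stages.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
                nn.SiLU(),
            ))
            in_ch = out_ch
        # Per-scale 1x1 convs inject the global condition into each feature map.
        self.fuse = nn.ModuleList(
            nn.Conv2d(base_channels * (2 ** s) + base_channels,
                      base_channels * (2 ** s), kernel_size=1)
            for s in range(num_scales)
        )

    def forward(self, text_emb, metadata, cond_image):
        # Global conditioning vector from text + metadata (e.g., lat/lon, GSD, month).
        g = self.text_proj(text_emb) + self.meta_proj(metadata)  # (B, C)
        feats, x = [], cond_image
        for stage, fuse in zip(self.image_stages, self.fuse):
            x = stage(x)
            # Broadcast the global vector spatially and fuse at this scale.
            g_map = g[:, :, None, None].expand(-1, -1, x.shape[2], x.shape[3])
            feats.append(fuse(torch.cat([x, g_map], dim=1)))
        return feats  # multiscale control features for the denoising U-Net


if __name__ == "__main__":
    fusion = MultiConditionFusion()
    text_emb = torch.randn(2, 768)          # e.g., pooled CLIP text embedding
    metadata = torch.randn(2, 4)            # e.g., normalized lat/lon/GSD/month
    cond_img = torch.randn(2, 3, 256, 256)  # e.g., sketch or road-map condition
    for f in fusion(text_emb, metadata, cond_img):
        print(tuple(f.shape))  # (2, 64, 128, 128), (2, 128, 64, 64), (2, 256, 32, 32)
```

In this sketch, the returned feature pyramid would be added to the corresponding encoder resolutions of a diffusion U-Net; how CRS-Diff actually wires its conditions is detailed in the paper itself.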
Pages: 14