Calibration & Reconstruction: Deep Integrated Language for Referring Image Segmentation

Cited by: 0
Authors
Yan, Yichen [1 ,2 ]
He, Xingjian [1 ]
Chen, Sihan [2 ]
Liu, Jing [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
referring image segmentation; iterative calibration; language reconstruction;
DOI
10.1145/3652583.3658095
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Referring image segmentation aims to segment the object referred to by a natural language expression from an image. The primary challenge lies in efficiently propagating fine-grained semantic information from textual features to visual features. Many recent works employ a Transformer to address this challenge. However, conventional Transformer decoders can distort linguistic information in deeper layers, leading to suboptimal results. In this paper, we introduce CRFormer, a model that iteratively calibrates multi-modal features in the Transformer decoder. We start by generating language queries from vision features, emphasizing different aspects of the input language. We then propose a novel Calibration Decoder (CDec) in which the multi-modal features can be iteratively calibrated by the input language features. In the Calibration Decoder, we use the output of each decoder layer together with the original language features to generate new queries for continuous calibration, which gradually updates the language features. Building on CDec, we introduce a Language Reconstruction Module and a reconstruction loss. This module leverages the queries from the final decoder layer to reconstruct the input language and compute the reconstruction loss, which further prevents the language information from being lost or distorted. Our experiments consistently show the superior performance of our approach on the RefCOCO, RefCOCO+, and G-Ref datasets compared to state-of-the-art methods.
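The iterative calibration idea in the abstract can be sketched in minimal NumPy form. This is a hedged illustration, not the paper's implementation: the single-head attention, the 0.5 fusion weight, the layer count, and the MSE surrogate for the reconstruction loss are all assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    # Scaled dot-product attention: queries attend to key/value features.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def calibration_decoder(vision_feats, lang_feats, num_layers=3):
    """Hypothetical sketch of the calibration loop: each layer's output
    is fused with the original language features to form new queries,
    so the language signal is re-injected at every layer instead of
    decaying with depth."""
    # Initial language queries generated from the vision features.
    queries = cross_attention(lang_feats, vision_feats, vision_feats)
    for _ in range(num_layers):
        # Decoder layer: current queries attend to vision features.
        out = cross_attention(queries, vision_feats, vision_feats)
        # Calibration: mix the layer output with the original language
        # features (0.5/0.5 weighting is an illustrative choice).
        queries = 0.5 * out + 0.5 * lang_feats
    return queries

def reconstruction_loss(queries, lang_feats):
    # Simple MSE surrogate for the language-reconstruction objective.
    return float(np.mean((queries - lang_feats) ** 2))

rng = np.random.default_rng(0)
vision = rng.standard_normal((196, 64))   # e.g. 14x14 patch features
language = rng.standard_normal((12, 64))  # e.g. 12 token embeddings
q = calibration_decoder(vision, language)
loss = reconstruction_loss(q, language)
```

Re-generating the queries from the original language features at every layer is the mechanism that keeps linguistic information from being progressively distorted, and the reconstruction loss on the final queries penalizes whatever distortion remains.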
Pages: 451-459
Page count: 9
Related Papers
50 records in total
  • [11] Referring in Language: An Integrated Approach
    Wang, Ying
    Wang, Tianhua
    SOCIAL SEMIOTICS, 2024,
  • [12] Mask prior generation with language queries guided networks for referring image segmentation
    Zhou, Jinhao
    Xiao, Guoqiang
    Lew, Michael S.
    Wu, Song
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2025, 253
  • [13] Prompt-guided bidirectional deep fusion network for referring image segmentation
    Wu, Junxian
    Zhang, Yujia
    Kampffmeyer, Michael
    Zhao, Xiaoguang
    NEUROCOMPUTING, 2025, 616
  • [14] Hierarchical collaboration for referring image segmentation
    Zhang, Wei
    Cheng, Zesen
    Chen, Jie
    Gao, Wen
    NEUROCOMPUTING, 2025, 613
  • [15] Toward Robust Referring Image Segmentation
    Wu, Jianzong
    Li, Xiangtai
    Li, Xia
    Ding, Henghui
    Tong, Yunhai
    Tao, Dacheng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 1782 - 1794
  • [17] Mask Grounding for Referring Image Segmentation
    Chng, Yong Xien
    Zheng, Henry
    Han, Yizeng
    Qiu, Xuchong
    Huang, Gao
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 26563 - 26573
  • [18] Bridging Vision and Language Encoders: Parameter-Efficient Tuning for Referring Image Segmentation
    Xu, Zunnan
    Chen, Zhihong
    Zhang, Yong
    Song, Yibing
    Wan, Xiang
    Li, Guanbin
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 17457 - 17466
  • [19] Deep Learning for Joint Image Reconstruction and Segmentation for SAR
    Kazemi, Samia
    Yazici, Birsen
    2020 IEEE INTERNATIONAL RADAR CONFERENCE (RADAR), 2020, : 890 - 894
  • [20] Language as Queries for Referring Video Object Segmentation
    Wu, Jiannan
    Jiang, Yi
    Sun, Peize
    Yuan, Zehuan
    Luo, Ping
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 4964 - 4974