Camouflaged object detection with counterfactual intervention

Cited by: 8
Authors
Li, Xiaofei [1 ]
Li, Hongying [1 ]
Zhou, Hao [2 ]
Yu, Miaomiao [1 ]
Chen, Dong [3 ]
Li, Shuohao [1 ]
Zhang, Jun [1 ]
Affiliations
[1] Natl Univ Def Technol, Lab Big Data & Decis, 109 Deya Rd, Changsha 410003, Hunan, Peoples R China
[2] Naval Univ Engn, Dept Operat & Planning, 717 Jianshe Ave, Wuhan 430033, Hubei, Peoples R China
[3] Natl Univ Def Technol, Sci & Technol Informat Syst Engn Lab, 109 Deya Rd, Changsha 410003, Hunan, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Camouflaged object detection; Texture-aware; Context-aware; Counterfactual intervention; SEGMENTATION; NETWORK;
DOI
10.1016/j.neucom.2023.126530
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Camouflaged object detection (COD) aims to identify camouflaged objects hidden in their surroundings, which is a valuable yet challenging task. The main challenge is that ambiguous semantic biases in camouflaged object datasets degrade COD results. To address this challenge, we design a counterfactual intervention network (CINet) to mitigate the influence of these ambiguous semantic biases and achieve accurate COD. Specifically, our CINet consists of three key modules, i.e., a texture-aware interaction module (TIM), a context-aware fusion module (CFM), and a counterfactual intervention module (CIM). The TIM extracts refined textures for accurate localization, the CFM fuses multi-scale contextual features to enhance detection performance, and the CIM learns more effective textures and makes unbiased predictions. Unlike most existing COD methods, which capture contextual features directly through the final loss function, we develop a counterfactual intervention strategy to learn more effective contextual textures. Extensive experiments on four challenging benchmark datasets demonstrate that our CINet significantly outperforms 31 state-of-the-art methods.
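The counterfactual intervention strategy described in the abstract can be pictured as running the prediction head twice, once with the real contextual features and once with a random replacement, and supervising the difference so the context branch must contribute beyond dataset bias. The sketch below illustrates this idea in PyTorch; it is a minimal, assumption-laden illustration (ToyFusionHead, CounterfactualIntervention, and the specific effect loss are hypothetical placeholders), not the authors' released CINet code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyFusionHead(nn.Module):
    """Toy stand-in for a prediction head: fuses texture and context features
    into a one-channel camouflage mask (illustrative only)."""
    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, 1, kernel_size=3, padding=1)

    def forward(self, texture_feat, context_feat):
        return self.fuse(torch.cat([texture_feat, context_feat], dim=1))

class CounterfactualIntervention(nn.Module):
    """Contrasts a factual prediction (real contextual features) with a
    counterfactual one (randomized context), so context that merely encodes
    dataset bias adds nothing beyond the random baseline."""
    def __init__(self, head: nn.Module):
        super().__init__()
        self.head = head

    def forward(self, texture_feat, context_feat, gt_mask):
        pred_fact = self.head(texture_feat, context_feat)    # factual prediction
        context_cf = torch.randn_like(context_feat)          # random intervention
        pred_cf = self.head(texture_feat, context_cf)        # counterfactual prediction
        loss_fact = F.binary_cross_entropy_with_logits(pred_fact, gt_mask)
        # Supervising the (factual - counterfactual) effect is one common way to
        # realize counterfactual intervention; the paper's exact loss may differ.
        loss_effect = F.binary_cross_entropy_with_logits(pred_fact - pred_cf, gt_mask)
        return pred_fact, loss_fact + loss_effect

# Toy usage on random tensors (batch of 2, 64-channel, 32x32 feature maps):
if __name__ == "__main__":
    cim = CounterfactualIntervention(ToyFusionHead(channels=64))
    tex = torch.randn(2, 64, 32, 32)
    ctx = torch.randn(2, 64, 32, 32)
    gt = torch.randint(0, 2, (2, 1, 32, 32)).float()
    _, loss = cim(tex, ctx, gt)
    loss.backward()

The random replacement plays the role of a counterfactual "what if the context carried no information": only contextual features whose causal effect on the mask survives this comparison are rewarded during training.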
Pages: 13