RMFDNet: Redundant and Missing Feature Decoupling Network for salient object detection

Cited by: 0
Authors
Zhou, Qianwei [1 ,2 ]
Wang, Jintao [1 ,2 ]
Li, Jiaqi [5 ]
Zhou, Chen [1 ]
Hu, Haigen [1 ,2 ]
Hu, Keli [3 ,4 ]
Affiliations
[1] Zhejiang Univ Technol, Coll Comp Sci & Technol, Hangzhou 310023, Peoples R China
[2] Key Lab Visual Media Intelligent Proc Technol Zhej, Hangzhou 310023, Peoples R China
[3] Shaoxing Univ, Dept Comp Sci & Engn, Shaoxing 312000, Peoples R China
[4] Hangzhou Med Coll, Affiliated Peoples Hosp, Canc Ctr, Dept Gastroenterol,Zhejiang Prov Peoples Hosp, Hangzhou 310014, Peoples R China
[5] Univ Hong Kong, Pokfulam, Hong Kong, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Salient object detection; Feature decoupling; Depth map; Redundant and Missing Feature Decoupling Network;
DOI
10.1016/j.engappai.2024.109459
Chinese Library Classification (CLC) number
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
Recently, many salient object detection methods have utilized edge contours to constrain the solution space. This approach aims to reduce the omission of salient features and minimize the inclusion of non-salient features. To further leverage the potential of edge-related information, this paper proposes a Redundant and Missing Feature Decoupling Network (RMFDNet). RMFDNet primarily consists of a segment decoder, a complement decoder, a removal decoder, and a recurrent repair encoder. The complement and removal decoders are designed to directly predict the missing and redundant features within the segmentation features. These predicted features are then processed by the recurrent repair encoder to refine the segmentation features. Experimental results on multiple Red-Green-Blue (RGB) and Red-Green-Blue-Depth (RGB-D) benchmark datasets, as well as polyp segmentation datasets, demonstrate that RMFDNet significantly outperforms previous state-of-the-art methods across various evaluation metrics. The efficiency, robustness, and generalization capability of RMFDNet are thoroughly analyzed through a carefully designed ablation study. The code will be made available upon paper acceptance.
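The abstract describes the decouple-then-repair data flow only at a high level. The following PyTorch-style sketch illustrates one plausible reading of that flow: a segment decoder produces initial saliency features, a complement decoder predicts missing features, a removal decoder predicts redundant features, and a recurrent repair encoder refines the result over a few iterations. Every module name (RMFDSketch, conv_block, repair_encoder, ...) and the additive fusion rule are illustrative assumptions, not the authors' released code.

# Hypothetical sketch of the decouple-then-repair idea described in the abstract.
# Module names and the exact refinement rule are assumptions, not the authors' code.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """3x3 conv + BN + ReLU, used here as a stand-in for the real decoders."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class RMFDSketch(nn.Module):
    """Illustrative only: segmentation features are refined by adding predicted
    missing features and subtracting predicted redundant features, then the
    result is re-encoded and the cycle repeats for a few iterations."""

    def __init__(self, channels: int = 64, num_iterations: int = 2):
        super().__init__()
        self.segment_decoder = conv_block(channels, channels)
        self.complement_decoder = conv_block(channels, channels)  # predicts missing features
        self.removal_decoder = conv_block(channels, channels)     # predicts redundant features
        self.repair_encoder = conv_block(channels, channels)      # recurrent repair encoder
        self.head = nn.Conv2d(channels, 1, kernel_size=1)         # saliency prediction head
        self.num_iterations = num_iterations

    def forward(self, backbone_features: torch.Tensor) -> torch.Tensor:
        seg = self.segment_decoder(backbone_features)
        for _ in range(self.num_iterations):
            missing = self.complement_decoder(seg)   # features the segmentation lacks
            redundant = self.removal_decoder(seg)    # non-salient features to suppress
            repaired = seg + missing - redundant     # one possible fusion rule (assumed)
            seg = self.repair_encoder(repaired)      # recurrently re-encode the repair
        return torch.sigmoid(self.head(seg))


if __name__ == "__main__":
    model = RMFDSketch()
    dummy = torch.randn(1, 64, 88, 88)   # e.g. a backbone feature map
    print(model(dummy).shape)            # torch.Size([1, 1, 88, 88])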
Pages: 13
Related papers
50 records in total
  • [21] A multi-source feature extraction network for salient object detection
    Xu, Kun
    Guo, Jichang
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (35): 24727 - 24742
  • [22] Attention guided contextual feature fusion network for salient object detection
    Zhang, Jin
    Shi, Yanjiao
    Zhang, Qing
    Cui, Liu
    Chen, Ying
    Yi, Yugen
    IMAGE AND VISION COMPUTING, 2022, 117
  • [23] Multi-pathway feature integration network for salient object detection
    Yao, Zhaojian
    Wang, Luping
    NEUROCOMPUTING, 2021, 461 : 462 - 478
  • [24] Cross-Layer Feature Pyramid Network for Salient Object Detection
    Li, Zun
    Lang, Congyan
    Liew, Jun Hao
    Li, Yidong
    Hou, Qibin
    Feng, Jiashi
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 4587 - 4598
  • [25] Enhanced Point Feature Network for Point Cloud Salient Object Detection
    Zhang, Ziyan
    Gao, Pan
    Peng, Siyi
    Duan, Chang
    Zhang, Ping
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 1617 - 1621
  • [26] Rich-scale feature fusion network for salient object detection
    Sun, Fengming
    Cui, Junjie
    Yuan, Xia
    Zhao, Chunxia
    IET IMAGE PROCESSING, 2023, 17 (03) : 794 - 806
  • [27] Dual-Branch Feature Fusion Network for Salient Object Detection
    Song, Zhehan
    Xu, Zhihai
    Wang, Jing
    Feng, Huajun
    Li, Qi
    PHOTONICS, 2022, 9 (01)
  • [28] DFNet: Discriminative feature extraction and integration network for salient object detection
    Noori, Mehrdad
    Mohammadi, Sina
    Majelan, Sina Ghofrani
    Bahri, Ali
    Havaei, Mohammad
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2020, 89
  • [29] Hierarchical boundary feature alignment network for video salient object detection
    Mao, Amin
    Yan, Jiebin
    Fang, Yuming
    Liu, Hantao
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2025, 109