R2Net: Residual refinement network for salient object detection
Cited: 12
Authors:
Zhang, Jin [1]
Liang, Qiuwei [2,3]
Guo, Qianqian [1]
Yang, Jinyu [1]
Zhang, Qing [1]
Shi, Yanjiao [1]
Affiliations:
[1] Shanghai Inst Technol, Sch Comp Sci & Informat Engn, Shanghai 201418, Peoples R China
[2] Wenzhou Med Univ, Sch Ophthalmol & Optometry, Sch Biomed Engn, Wenzhou 325027, Peoples R China
[3] Wenzhou Med Univ, Hosp Eye, Wenzhou 325027, Peoples R China
Funding:
National Natural Science Foundation of China;
Natural Science Foundation of Shanghai;
Keywords:
Deep learning;
Salient object detection;
Multi-scale feature;
Feature fusion;
MODEL;
DOI:
10.1016/j.imavis.2022.104423
CLC Number:
TP18 [Artificial Intelligence Theory];
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
The multi-scale features and the fusion strategy of contextual features are key to the salient object detection task. Previous multi-scale-based works often overlooked the completeness of features when acquiring multi-scale features. Moreover, in complex environments existing decoders struggle to accurately capture the salient object and refine its boundaries simultaneously, which leads to unsatisfactory saliency maps. To address these problems, we present a Residual Refinement Network (R2Net) for salient object detection, composed of the Residual Pyramid Module (RPM), the Residual Fusion Module (RFM) and the Feature Optimize Module (FOM). RPM integrates feature information across different receptive fields, so it not only obtains multi-scale information but also retains the local detail of features. RFM better locates salient objects and refines their boundaries through the interweaving and fusion of multi-layer features, and FOM is designed to further refine the fused features. Furthermore, we propose a Structural Polishing (SP) loss, which better guides the network through pixel-level supervision, global supervision and boundary supervision to generate high-quality saliency maps with fine boundaries. Experimental results on 6 benchmark datasets demonstrate that the proposed method outperforms 18 state-of-the-art methods. The code and results of our method are available at https://github.com/zhangjin12138/R2Net. (c) 2022 Elsevier B.V. All rights reserved.
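The abstract describes the SP loss as combining pixel-level, global and boundary supervision but does not give the formulation here. The following PyTorch-style sketch is a rough illustration only: it assumes binary cross-entropy for the pixel term, a soft IoU for the global term, and a max-pooling morphological gradient for the boundary term. The paper's actual terms and weighting may differ, and the function name `sp_loss_sketch` is hypothetical.

```python
import torch
import torch.nn.functional as F

def sp_loss_sketch(pred, gt):
    """Hedged sketch of an SP-style loss with three supervision terms.

    pred: logits from the network, shape (B, 1, H, W).
    gt:   ground-truth saliency mask in {0, 1}, same shape, float.
    """
    prob = torch.sigmoid(pred)

    # Pixel-level supervision: per-pixel binary cross-entropy (assumed).
    pixel = F.binary_cross_entropy_with_logits(pred, gt)

    # Global supervision: soft IoU over the whole saliency map (assumed).
    inter = (prob * gt).sum(dim=(2, 3))
    union = (prob + gt - prob * gt).sum(dim=(2, 3))
    global_term = (1.0 - (inter + 1.0) / (union + 1.0)).mean()

    def edges(x):
        # Morphological gradient via max-pooling: highlights boundaries.
        return (F.max_pool2d(x, 3, stride=1, padding=1) - x).clamp(0, 1)

    # Boundary supervision: compare edge maps of prediction and mask (assumed).
    boundary = F.binary_cross_entropy(edges(prob), edges(gt))

    # Equal weighting is an assumption; the paper may weight terms differently.
    return pixel + global_term + boundary

if __name__ == "__main__":
    pred = torch.randn(2, 1, 64, 64)                       # dummy logits
    gt = torch.randint(0, 2, (2, 1, 64, 64)).float()       # dummy mask
    print(sp_loss_sketch(pred, gt))
```

The three terms act at complementary scales: the BCE term supervises each pixel independently, the IoU term supervises the map as a whole, and the edge-map term concentrates gradient on object boundaries, which matches the abstract's stated goal of saliency maps with fine boundaries.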
Pages: 13