Global attention network for collaborative saliency detection

Cited by: 2
Authors
Li, Ce [1]
Xuan, Shuxing [1]
Liu, Fenghua [1]
Chang, Enbing [1]
Wu, Hailei [1]
Affiliations
[1] Lanzhou Univ Technol, Coll Elect & Informat Engn, Lanzhou, Gansu, Peoples R China
Keywords
Co-saliency; Collaborative correlation; Global information; Attention; Model
DOI
10.1007/s13042-022-01531-9
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Collaborative saliency (co-saliency) detection aims to identify the common and salient objects or regions in a set of related images. The major challenge is how to extract useful information from single images and from the image group to express collaborative saliency cues. In this paper, we propose a global attention network for co-saliency detection. A feature enhancement module (FEM) first extracts individual features; a global information module (GIM) is then applied to all individual features to capture useful global information and produce individual cues; finally, group collaborative cues are obtained by a collaborative correlation module (CCM). Specifically, channel attention and spatial attention modules are plugged into the convolutional feature network. To increase global context information, the GIM embeds non-local modules in the backbone network and adopts global average pooling to extract a global semantic representation vector as the individual cue. The CCM then extracts collaborative and consistent information by computing the correlation between the individual features of each input image and the individual cues. We evaluate our method on two co-saliency detection benchmark datasets (CoSal2015 and iCoSeg). Extensive experiments demonstrate the effectiveness of the proposed model; in most cases, our method outperforms state-of-the-art methods.
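To make the abstract's pipeline concrete, the following is a minimal PyTorch sketch of the two group-level steps it describes: a non-local operation plus global average pooling that turns each image's features into an individual cue vector (the GIM idea), and a correlation step that compares each image's spatial features with the group's cues to highlight consistent regions (the CCM idea). The paper does not provide implementation details here, so all module names, layer choices, and tensor shapes below are illustrative assumptions, not the authors' actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalInformationModule(nn.Module):
    # Assumed GIM-style block: non-local self-attention over spatial positions,
    # followed by global average pooling to obtain an individual cue vector.
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 2, 1)
        self.key = nn.Conv2d(channels, channels // 2, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                                   # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)        # (B, HW, C/2)
        k = self.key(x).flatten(2)                          # (B, C/2, HW)
        v = self.value(x).flatten(2).transpose(1, 2)        # (B, HW, C)
        attn = F.softmax(q @ k / (c // 2) ** 0.5, dim=-1)   # (B, HW, HW)
        ctx = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        feat = x + ctx                                       # residual non-local feature
        cue = F.adaptive_avg_pool2d(feat, 1).flatten(1)      # (B, C) individual cue
        return feat, cue

class CollaborativeCorrelationModule(nn.Module):
    # Assumed CCM-style step: correlate each image's spatial features with a
    # group-level cue (here, the mean of all individual cues) to obtain maps
    # that respond to regions shared across the image group.
    def forward(self, feats, cues):                          # feats: (N, C, H, W), cues: (N, C)
        group_cue = cues.mean(dim=0, keepdim=True)           # (1, C) group collaborative cue
        corr = torch.einsum('nchw,kc->nkhw', feats, group_cue)
        return torch.sigmoid(corr)                           # (N, 1, H, W) collaborative map

Under these assumptions, running the two modules over a group of N related images would yield one coarse collaborative map per image, which a decoder could then refine into the final co-saliency prediction.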
Pages: 407-417
Number of pages: 11