A Weakly Supervised Semantic Segmentation Method Based on Local Superpixel Transformation

Cited by: 2
Authors
Ma, Zhiming [1 ]
Chen, Dali [1 ]
Mo, Yilin [1 ]
Chen, Yue [2 ]
Zhang, Yumin [1 ]
Affiliations
[1] Northeastern Univ, Coll Informat Sci & Engn, Shenyang 110819, Liaoning, Peoples R China
[2] Northeastern Univ, Coll Med & Biol Informat Engn, Chuangxin St, Shenyang 110819, Liaoning, Peoples R China
Funding
National Natural Science Foundation of China; Fundamental Research Funds for the Central Universities
Keywords
Weakly supervised learning; Semantic segmentation; Superpixel; Consistency; Class activation mapping; INFORMATION; NETWORKS; IMAGE;
DOI
10.1007/s11063-023-11408-9
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Weakly supervised semantic segmentation (WSSS) obtains pseudo-semantic masks from a weaker level of supervision, reducing the need for costly pixel-level annotations. However, the common class activation map (CAM)-based approach to pseudo-mask generation suffers from sparse coverage, producing false positive and false negative regions that reduce accuracy. We propose a WSSS method based on local superpixel transformation that combines superpixel theory with local image information. Our method uses a superpixel local-consistency weighted cross-entropy loss to correct erroneous regions, together with a post-processing step based on the adjacent superpixel affinity matrix (ASAM) that expands false negative regions, suppresses false positives, and refines semantic boundaries. Our method achieves 73.5% mIoU on the PASCAL VOC 2012 validation set (2.5% higher than our baseline, EPS) and 73.9% on the test set, and the ASAM post-processing step is further validated on several state-of-the-art methods. If our paper is accepted, our code will be published at https://github.com/JimmyMa99/SPL.
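The paper itself is not reproduced here, so the following is only a minimal sketch of what a "superpixel local-consistency weighted cross-entropy" might look like: each pixel's loss is weighted by how well its pseudo-label agrees with the majority pseudo-label of its superpixel. All function names and the exact weighting scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def superpixel_consistency_weights(pseudo_mask, superpixels):
    """Hypothetical consistency weighting: within each superpixel, pixels
    agreeing with the majority pseudo-label get the superpixel's agreement
    fraction as weight; disagreeing pixels get the complement."""
    weights = np.zeros(pseudo_mask.shape, dtype=float)
    for sp in np.unique(superpixels):
        region = superpixels == sp
        labels, counts = np.unique(pseudo_mask[region], return_counts=True)
        majority = labels[np.argmax(counts)]
        consistency = counts.max() / counts.sum()  # fraction of agreeing pixels
        weights[region] = np.where(pseudo_mask[region] == majority,
                                   consistency, 1.0 - consistency)
    return weights

def weighted_cross_entropy(probs, pseudo_mask, weights, eps=1e-12):
    """Pixel-wise cross-entropy against the pseudo-mask, scaled by the
    per-pixel consistency weights. probs has shape (C, H, W)."""
    yy, xx = np.indices(pseudo_mask.shape)
    p = probs[pseudo_mask, yy, xx]  # predicted prob of each pixel's pseudo-label
    return -(weights * np.log(p + eps)).sum() / weights.sum()
```

Under this reading, a perfectly homogeneous superpixel contributes full-weight loss terms, while pixels that contradict their superpixel's dominant label are down-weighted, which is one plausible way to suppress the false positive/negative regions the abstract describes.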
Pages: 12039-12060 (22 pages)
Related Papers
50 records in total
  • [31] Hierarchical Semantic Contrast for Weakly Supervised Semantic Segmentation
    Wu, Yuanchen
    Li, Xiaoqiang
    Dai, Songmin
    Li, Jide
    Liu, Tong
    Xie, Shaorong
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 1542 - 1550
  • [32] Weakly Supervised Image Semantic Segmentation Based on Clustering Superpixels
    Yan, Xiong
    Liu, Xiaohua
    NINTH INTERNATIONAL CONFERENCE ON GRAPHIC AND IMAGE PROCESSING (ICGIP 2017), 2018, 10615
  • [33] Weakly-Supervised Semantic Segmentation Based on Improved CAM
    Yan, Xingya
    Gao, Ying
    Wang, Gaihua
    Lecture Notes on Data Engineering and Communications Technologies, 2022, 89 : 584 - 594
  • [34] Weakly Supervised Semantic Segmentation Based on Semantic Texton Forest and Saliency Prior
    Han Zheng
    Xiao Zhitao
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2018, 40 (03) : 610 - 617
  • [35] Pairwise-Pixel Self-Supervised and Superpixel-Guided Prototype Contrastive Loss for Weakly Supervised Semantic Segmentation
    Xie, Lu
    Li, Weigang
    Zhao, Yuntao
    COGNITIVE COMPUTATION, 2024, 16 (03) : 936 - 948
  • [36] Boosted MIML method for weakly-supervised image semantic segmentation
    Liu, Yang
    Li, Zechao
    Liu, Jing
    Lu, Hanqing
    MULTIMEDIA TOOLS AND APPLICATIONS, 2015, 74 (02) : 543 - 559
  • [37] A Weakly Supervised Semantic Segmentation Method on Lung Adenocarcinoma Histopathology Images
    Lan, Xiaobin
    Mei, Jiaming
    Lin, Ruohan
    Chen, Jiahao
    Zhang, Yanju
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, ICIC 2023, PT II, 2023, 14087 : 688 - 698
  • [38] Multi-model Integrated Weakly Supervised Semantic Segmentation Method
    Xiong C.
    Zhi H.
    Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, 2019, 31 (05): : 800 - 807
  • [39] Boosted MIML method for weakly-supervised image semantic segmentation
    Yang Liu
    Zechao Li
    Jing Liu
    Hanqing Lu
    Multimedia Tools and Applications, 2015, 74 : 543 - 559
  • [40] Coupling Global Context and Local Contents for Weakly-Supervised Semantic Segmentation
    Wang, Chunyan
    Zhang, Dong
    Zhang, Liyan
    Tang, Jinhui
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (10) : 13483 - 13495