Informative Scene Graph Generation via Debiasing

Times Cited: 0
Authors
Gao, Lianli [1 ]
Lyu, Xinyu [2 ]
Guo, Yuyu [1 ]
Hu, Yuxuan [3 ]
Li, Yuan-Fang [4 ]
Xu, Lu [5 ]
Shen, Heng Tao [6 ]
Song, Jingkuan [6 ]
Affiliations
[1] Univ Elect Sci & Technol China, Shenzhen Inst Adv Study, Shenzhen, Peoples R China
[2] Southwestern Univ Finance & Econ, Chengdu, Peoples R China
[3] Southwest Univ, Chongqing, Peoples R China
[4] Monash Univ, Melbourne, Vic, Australia
[5] Kuaishou, Beijing, Peoples R China
[6] Tongji Univ, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Scene graph generation; Visual relationship; Debiasing; Information content; SEMANTIC SIMILARITY; ATTENTION;
DOI
10.1007/s11263-025-02365-y
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Scene graph generation aims to detect visual relationship triplets of the form (subject, predicate, object). Due to biases in the data, current models tend to predict common predicates, e.g., "on" and "at", instead of informative ones, e.g., "standing on" and "looking at". This tendency causes a loss of precise information and degrades overall performance. If a model describes an image with "stone on road" rather than "stone blocking road", the scene may be gravely misunderstood. We argue that this phenomenon stems from two imbalances: imbalance at the semantic-space level and imbalance at the training-sample level. To address them, we propose DB-SGG, an effective framework based on debiasing rather than conventional distribution fitting. It integrates two components, Semantic Debiasing (SD) and Balanced Predicate Learning (BPL), one for each imbalance. SD utilizes a confusion matrix and a bipartite graph to construct relationships among predicates. BPL adopts a random undersampling strategy and an ambiguity-removal strategy to focus training on informative predicates. Because the process is model-agnostic, our method can be easily applied to existing SGG models; it outperforms Transformer by 136.3%, 119.5%, and 122.6% on mR@20 across the three SGG sub-tasks of the SGG-VG dataset. The method is further verified on another complex SGG dataset (SGG-GQA) and on two downstream tasks (sentence-to-graph retrieval and image captioning).
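The abstract sketches two model-agnostic components: Semantic Debiasing (SD), which derives predicate relationships from a confusion matrix and a bipartite graph, and Balanced Predicate Learning (BPL), which rebalances training via random undersampling and ambiguity removal. The Python sketch below is a minimal illustration of these two ideas under assumed data structures; it is not the authors' released code, and all names (build_confusion_links, undersample_predicates) and the confusion threshold are hypothetical.

"""
Illustrative sketch only (assumed interfaces, not the DB-SGG implementation):
  * SD idea: map each predicate to the predicates it is frequently confused
    with, read off a (row-normalised) confusion matrix.
  * BPL idea: randomly undersample over-represented predicates in the set of
    (subject, predicate, object) training triplets.
"""
import random
from collections import Counter, defaultdict

import numpy as np


def build_confusion_links(confusion: np.ndarray, threshold: float = 0.1):
    """Map each predicate i to predicates j it is frequently confused with.

    `confusion[i, j]` is assumed to count ground-truth predicate i samples
    that a trained SGG model predicts as predicate j. Off-diagonal entries of
    the row-normalised matrix above `threshold` become weighted edges of a
    bipartite graph between common and informative predicates.
    """
    rows = confusion / confusion.sum(axis=1, keepdims=True).clip(min=1e-8)
    links = defaultdict(list)
    for i in range(rows.shape[0]):
        for j in range(rows.shape[1]):
            if i != j and rows[i, j] >= threshold:
                links[i].append((j, float(rows[i, j])))
    return links


def undersample_predicates(triplets, max_per_predicate: int, seed: int = 0):
    """Randomly keep at most `max_per_predicate` triplets of each predicate."""
    rng = random.Random(seed)
    by_pred = defaultdict(list)
    for triplet in triplets:
        by_pred[triplet[1]].append(triplet)  # group by predicate label
    balanced = []
    for samples in by_pred.values():
        if len(samples) > max_per_predicate:
            samples = rng.sample(samples, max_per_predicate)
        balanced.extend(samples)
    rng.shuffle(balanced)
    return balanced


if __name__ == "__main__":
    # Toy predicates: 0 = "on" (common), 1 = "standing on", 2 = "sitting on".
    # Undersampling caps the dominant "on" class.
    data = [("cup", 0, "table")] * 100 + [("man", 1, "skateboard")] * 5
    print(Counter(p for _, p, _ in data))                              # 0: 100, 1: 5
    print(Counter(p for _, p, _ in undersample_predicates(data, 10)))  # 0: 10, 1: 5

    # Toy confusion matrix: "on" is often predicted for the informative classes.
    conf = np.array([[70, 20, 10],
                     [5, 90, 5],
                     [8, 4, 88]], dtype=float)
    print(build_confusion_links(conf, threshold=0.1))  # edges for predicate 0 only

In the actual framework the confusion-derived predicate graph and the resampled training set would feed an off-the-shelf SGG model; the sketch only shows the data-side preprocessing that makes the approach model-agnostic.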
Pages: 24
Related Papers
50 records in total
  • [1] From General to Specific: Informative Scene Graph Generation via Balance Adjustment
    Guo, Yuyu
    Gao, Lianli
    Wang, Xuanhan
    Hu, Yuxuan
    Xu, Xing
    Lu, Xu
    Shen, Heng Tao
    Song, Jingkuan
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 16363 - 16372
  • [2] Meta Spatio-Temporal Debiasing for Video Scene Graph Generation
    Xu, Li
    Qu, Haoxuan
    Kuen, Jason
    Gu, Jiuxiang
    Liu, Jun
    COMPUTER VISION - ECCV 2022, PT XXVII, 2022, 13687 : 374 - 390
  • [3] Not All Relations are Equal: Mining Informative Labels for Scene Graph Generation
    Goel, Arushi
    Fernando, Basura
    Keller, Frank
    Bilen, Hakan
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 15575 - 15585
  • [4] TD2-Net: Toward Denoising and Debiasing for Dynamic Scene Graph Generation
    Lin, Xin
    Shi, Chong
    Zhan, Yibing
    Yang, Zuopeng
    Wu, Yaqi
    Tao, Dacheng
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 4, 2024, : 3495 - 3503
  • [5] Mining informativeness in scene graphs: Prioritizing informative relations in Scene Graph Generation for enhanced performance in applications
    Neau, Maelic
    Santos, Paulo E.
    Bosser, Anne-Gwenn
    Macvicar, Alistair
    Buche, Cedric
    PATTERN RECOGNITION LETTERS, 2025, 189 : 64 - 70
  • [6] Dynamic Scene Graph Generation via Temporal Prior Inference
    Wang, Shuang
    Gao, Lianli
    Lyu, Xinyu
    Guo, Yuyu
    Zeng, Pengpeng
    Song, Jingkuan
    MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia, 2022, : 5793 - 5801
  • [7] A Novel Framework for Scene Graph Generation via Prior Knowledge
    Wang, Zhenghao
    Lian, Jing
    Li, Linhui
    Zhao, Jian
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (05) : 3768 - 3781
  • [8] EchoScene: Indoor Scene Generation via Information Echo Over Scene Graph Diffusion
    Zhai, Guangyao
    Oernek, Evin Pinar
    Chen, Dave Zhenyu
    Zhang, Ruotong
    Du, Yan
    Navab, Nassir
    Tombari, Federico
    Busam, Benjamin
    COMPUTER VISION - ECCV 2024, PT XXI, 2025, 15079 : 167 - 184
  • [9] Unconditional Scene Graph Generation
    Garg, Sarthak
    Dhamo, Helisa
    Farshad, Azade
    Musatian, Sabrina
    Navab, Nassir
    Tombari, Federico
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 16342 - 16351