Learning Visual Prior via Generative Pre-Training

Citations: 0
Authors
Xie, Jinheng [1 ]
Ye, Kai [2 ]
Li, Yudong [2 ]
Li, Yuexiang [3 ]
Lin, Kevin Qinghong [1 ]
Zheng, Yefeng [3 ]
Shen, Linlin [2 ]
Shou, Mike Zheng [1 ]
Affiliations
[1] Natl Univ Singapore, Show Lab, Singapore, Singapore
[2] Shenzhen Univ, Shenzhen, Peoples R China
[3] Tencent YouTu Lab, Jarvis Res Ctr, Shenzhen, Peoples R China
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Funding
National Research Foundation, Singapore;
Keywords
DOI
Not available
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Various stuff and things in visual data possess specific traits, which can be learned by deep neural networks and are implicitly represented as the visual prior, e.g., object location and shape, in the model. Such a prior potentially impacts many vision tasks. For example, in conditional image synthesis, spatial conditions failing to adhere to the prior can result in visually inaccurate synthetic results. This work aims to explicitly learn the visual prior and enable the customization of sampling. Inspired by advances in language modeling, we propose to learn the visual prior via generative pre-training, dubbed VISORGPT. By discretizing visual locations, e.g., bounding boxes, human poses, and instance masks, into sequences, VISORGPT can model the visual prior through likelihood maximization. Besides, prompt engineering is investigated to unify various visual locations and enable customized sampling of sequential outputs from the learned prior. Experimental results demonstrate the effectiveness of VISORGPT in modeling the visual prior and extrapolating to novel scenes, suggesting that discrete visual locations can be integrated into the learning paradigm of current language models to further perceive the visual world.
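The abstract describes discretizing visual locations such as bounding boxes into token sequences that a language model can fit by likelihood maximization. The following minimal sketch illustrates one plausible serialization of that idea; the bin count, the `<box>`/`<bin_*>`/`<eos>` token names, and the prompt layout are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of VISORGPT-style sequence construction:
# quantize continuous box coordinates into discrete bins and
# serialize (class, bbox) annotations as a flat token string.

def quantize(value, max_value, num_bins=512):
    """Map a continuous coordinate in [0, max_value] to a bin index."""
    value = min(max(value, 0.0), float(max_value))
    return int(value / max_value * (num_bins - 1))

def boxes_to_sequence(objects, image_w, image_h, num_bins=512):
    """Serialize a list of (class_name, (x0, y0, x1, y1)) annotations."""
    tokens = ["<box>"]  # assumed prompt token naming the annotation type
    for cls, (x0, y0, x1, y1) in objects:
        tokens.append(cls)
        for v, m in ((x0, image_w), (y0, image_h), (x1, image_w), (y1, image_h)):
            tokens.append(f"<bin_{quantize(v, m, num_bins)}>")
    tokens.append("<eos>")
    return " ".join(tokens)

seq = boxes_to_sequence([("person", (10.0, 20.0, 200.0, 400.0))], 640, 480)
print(seq)
# -> <box> person <bin_7> <bin_21> <bin_159> <bin_425> <eos>
```

A decoder-only language model trained on such sequences can then be sampled from a short prompt (e.g., the annotation-type token plus a class name) to draw plausible locations from the learned prior.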
Pages: 19
Related Papers
50 records in total
  • [41] Masked Channel Modeling for Bootstrapping Visual Pre-training
    Liu, Yang
    Wang, Xinlong
    Zhu, Muzhi
    Cao, Yue
    Huang, Tiejun
    Shen, Chunhua
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025, 133 (02) : 760 - 780
  • [42] Improved OOD Generalization via Adversarial Training and Pre-training
    Yi, Mingyang
    Hou, Lu
    Sun, Jiacheng
    Shang, Lifeng
    Jiang, Xin
    Liu, Qun
    Ma, Zhi-Ming
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [43] NUWA: Visual Synthesis Pre-training for Neural visUal World creAtion
    Wu, Chenfei
    Liang, Jian
    Ji, Lei
    Yang, Fan
    Fang, Yuejian
    Jiang, Daxin
    Duan, Nan
    COMPUTER VISION - ECCV 2022, PT XVI, 2022, 13676 : 720 - 736
  • [44] DeviceGPT: A Generative Pre-Training Transformer on the Heterogenous Graph for Internet of Things
    Ren, Yimo
    Wang, Jinfa
    Li, Hong
    Zhu, Hongsong
    Sun, Limin
    PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023, 2023, : 1929 - 1933
  • [45] Conditional Variational Autoencoder with Balanced Pre-training for Generative Adversarial Networks
    Yao, Yuchong
    Wang, Xiaohui
    Ma, Yuanbang
    Fang, Han
    Wei, Jiaying
    Chen, Liyuan
    Anaissi, Ali
    Braytee, Ali
    2022 IEEE 9TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA), 2022, : 156 - 165
  • [46] GPT-GNN: Generative Pre-Training of Graph Neural Networks
    Hu, Ziniu
    Dong, Yuxiao
    Wang, Kuansan
    Chang, Kai-Wei
    Sun, Yizhou
    KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 1857 - 1867
  • [47] Generative Pre-training for Paraphrase Generation by Representing and Predicting Spans in Exemplars
    Bui, Tien-Cuong
    Le, Van-Duc
    To, Hai-Thien
    Cha, Sang Kyun
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (BIGCOMP 2021), 2021, : 83 - 90
  • [48] Predicting City Origin-Destination Flow with Generative Pre-training
    Zhang, Mingwei
    Gao, Lizhong
    Wang, Qiao
    Gao, Weihao
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT IX, 2024, 15024 : 233 - 245
  • [49] Insights into Pre-training via Simpler Synthetic Tasks
    Wu, Yuhuai
    Li, Felix
    Liang, Percy
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [50] Text-Guided HuBERT: Self-Supervised Speech Pre-Training via Generative Adversarial Networks
    Ma, Duo
    Yue, Xianghu
    Ao, Junyi
    Gao, Xiaoxue
    Li, Haizhou
    IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 2055 - 2059