Unified pre-training for program understanding and generation

Cited by: 0
Authors
Ahmad, Wasi Uddin [1 ]
Chakraborty, Saikat [2 ]
Ray, Baishakhi [2 ]
Chang, Kai-Wei [1 ]
Affiliations
[1] University of California, Los Angeles, United States
[2] Columbia University, United States
Source
arXiv | 2021
Keywords
Broad spectrum - Code translation - Language generation - Legacy code - Natural languages - Pre-training - Program generation - Program understanding - Sequence models - Summarization and generation
DOI
Not available
Abstract
Related papers
50 total
  • [21] All in One: Exploring Unified Video-Language Pre-training
    Wang, Jinpeng
    Ge, Yixiao
    Yan, Rui
    Ge, Yuying
    Lin, Kevin Qinghong
    Tsutsui, Satoshi
    Lin, Xudong
    Cai, Guanyu
    Wu, Jianping
    Shan, Ying
    Qie, Xiaohu
    Shou, Mike Zheng
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 6598 - 6608
  • [22] Unified Speech-Text Pre-training for Speech Translation and Recognition
    Tang, Yun
    Gong, Hongyu
    Dong, Ning
    Wang, Changhan
    Hsu, Wei-Ning
    Gu, Jiatao
    Baevski, Alexei
    Li, Xian
    Mohamed, Abdelrahman
    Auli, Michael
    Pino, Juan
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 1488 - 1499
  • [23] UniCLIP: Unified Framework for Contrastive Language-Image Pre-training
    Lee, Janghyeon
    Kim, Jongsuk
    Shon, Hyounguk
    Kim, Bumsoo
    Kim, Seung Hwan
    Lee, Honglak
    Kim, Junmo
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [24] UniVIP: A Unified Framework for Self-Supervised Visual Pre-training
    Li, Zhaowen
    Zhu, Yousong
    Yang, Fan
    Li, Wei
    Zhao, Chaoyang
    Chen, Yingying
    Chen, Zhiyang
    Xie, Jiahao
    Wu, Liwei
    Zhao, Rui
    Tang, Ming
    Wang, Jinqiao
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 14607 - 14616
  • [25] UniTE: A Survey and Unified Pipeline for Pre-Training Spatiotemporal Trajectory Embeddings
    Lin, Yan
    Zhou, Zeyu
    Liu, Yicheng
    Lv, Haochen
    Wen, Haomin
    Li, Tianyi
    Li, Yushuai
    Jensen, Christian S.
    Guo, Shengnan
    Lin, Youfang
    Wan, Huaiyu
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2025, 37 (03) : 1475 - 1494
  • [26] UniXcoder: Unified Cross-Modal Pre-training for Code Representation
    Guo, Daya
    Lu, Shuai
    Duan, Nan
    Wang, Yanlin
    Zhou, Ming
    Yin, Jian
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 7212 - 7225
  • [27] Pre-training to Match for Unified Low-shot Relation Extraction
    Liu, Fangchao
    Lin, Hongyu
    Han, Xianpei
    Cao, Boxi
    Sun, Le
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 5785 - 5795
  • [28] Unified Vision-Language Pre-Training for Image Captioning and VQA
    Zhou, Luowei
    Palangi, Hamid
    Zhang, Lei
    Hu, Houdong
    Corso, Jason J.
    Gao, Jianfeng
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 13041 - 13049
  • [29] LayoutLM: Pre-training of Text and Layout for Document Image Understanding
    Xu, Yiheng
    Li, Minghao
    Cui, Lei
    Huang, Shaohan
    Wei, Furu
    Zhou, Ming
    KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 1192 - 1200
  • [30] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    Devlin, Jacob
    Chang, Ming-Wei
    Lee, Kenton
    Toutanova, Kristina
    2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 1, 2019, : 4171 - 4186