Unified pre-training for program understanding and generation

Cited by: 0
Authors
Ahmad, Wasi Uddin [1 ]
Chakraborty, Saikat [2 ]
Ray, Baishakhi [2 ]
Chang, Kai-Wei [1 ]
Affiliations
[1] University of California, Los Angeles, United States
[2] Columbia University, United States
Source
arXiv | 2021
Keywords
Broad spectrum; Code translation; Language generation; Legacy code; Natural languages; Pre-training; Program generation; Program understanding; Sequence models; Summarization and generations
DOI
Not available
Related papers
50 items total
  • [31] Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training
    Ning, Yuting
    Huang, Zhenya
    Lin, Xin
    Chen, Enhong
    Tong, Shiwei
    Gong, Zheng
    Wang, Shijin
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 11, 2023, : 13409 - 13418
  • [32] PRE-TRAINING FOR QUERY REWRITING IN A SPOKEN LANGUAGE UNDERSTANDING SYSTEM
    Chen, Zheng
    Fan, Xing
    Ling, Yuan
    Mathias, Lambert
    Guo, Chenlei
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 7969 - 7973
  • [33] JAKET: Joint Pre-training of Knowledge Graph and Language Understanding
    Yu, Donghan
    Zhu, Chenguang
    Yang, Yiming
    Zeng, Michael
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 11630 - 11638
  • [34] Understanding the Effects of Pre-Training for Object Detectors via Eigenspectrum
    Shinya, Yosuke
    Simo-Serra, Edgar
    Suzuki, Taiji
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 1931 - 1941
  • [35] ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding
    Sun, Yu
    Wang, Shuohuan
    Li, Yukun
    Feng, Shikun
    Tian, Hao
    Wu, Hua
    Wang, Haifeng
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 8968 - 8975
  • [36] MRBERT: Pre-Training of Melody and Rhythm for Automatic Music Generation
    Li, Shuyu
    Sung, Yunsick
    MATHEMATICS, 2023, 11 (04)
  • [37] Dynamic Scene Graph Generation via Anticipatory Pre-training
    Li, Yiming
    Yang, Xiaoshan
    Xu, Changsheng
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 13864 - 13873
  • [38] MASS: Masked Sequence to Sequence Pre-training for Language Generation
    Song, Kaitao
    Tan, Xu
    Qin, Tao
    Lu, Jianfeng
    Liu, Tie-Yan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, 2019, VOL 97
  • [39] General then Personal: Decoupling and Pre-training for Personalized Headline Generation
    Song, Yun-Zhu
    Chen, Yi-Syuan
    Wang, Lu
    Shuai, Hong-Han
    TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2023, 11 : 1588 - 1607
  • [40] Efficient Pre-training for Localized Instruction Generation of Procedural Videos
    Batra, Anil
    Moltisanti, Davide
    Sevilla-Lara, Laura
    Rohrbach, Marcus
    Keller, Frank
    COMPUTER VISION - ECCV 2024, PT XXXIX, 2025, 15097 : 347 - 363