A Systematic Survey of Chemical Pre-trained Models

Cited by: 0
Authors
Xia, Jun [1 ]
Zhu, Yanqiao [2 ]
Du, Yuanqi [3 ]
Li, Stan Z. [1 ]
Affiliations
[1] Westlake Univ, Res Ctr Ind Future, Hangzhou, Peoples R China
[2] Univ Calif Los Angeles, Los Angeles, CA USA
[3] Cornell Univ, Ithaca, NY USA
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Deep learning has achieved remarkable success in learning representations for molecules, which is crucial for various biochemical applications ranging from property prediction to drug design. However, training Deep Neural Networks (DNNs) from scratch often requires abundant labeled molecules, which are expensive to acquire in the real world. To alleviate this issue, tremendous efforts have been devoted to Chemical Pre-trained Models (CPMs), where DNNs are pre-trained on large-scale unlabeled molecular databases and then fine-tuned on specific downstream tasks. Despite this rapid progress, a systematic review of the fast-growing field is still lacking. In this paper, we present the first survey that summarizes the current progress of CPMs. We first highlight the limitations of training molecular representation models from scratch to motivate CPM studies. Next, we systematically review recent advances in this area from several key perspectives, including molecular descriptors, encoder architectures, pre-training strategies, and applications. We also highlight the challenges and promising avenues for future research, providing a useful resource for both the machine learning and scientific communities.
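To make the pre-train-then-fine-tune workflow described in the abstract concrete, the following is a minimal illustrative sketch in PyTorch. It is not taken from the surveyed paper: the encoder, the masked-atom pretext task, the masking ratio, and the synthetic data are all hypothetical assumptions chosen only to show the two-stage recipe. An encoder is first pre-trained with a masked-atom-type objective on "unlabeled" molecules, and the same encoder is then fine-tuned with a small property-prediction head on a labeled task.

# Minimal sketch of the pre-train / fine-tune paradigm for chemical models.
# Illustrative only: MoleculeEncoder, mask_atoms, and the random data are
# hypothetical placeholders, not the architecture of any specific CPM.
import torch
import torch.nn as nn

NUM_ATOM_TYPES = 10          # toy vocabulary of atom types
MASK_ID = NUM_ATOM_TYPES     # extra id used for masked atoms
EMB_DIM = 64

class MoleculeEncoder(nn.Module):
    """Embeds a sequence of atom-type ids into per-atom feature vectors."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_ATOM_TYPES + 1, EMB_DIM)
        self.mlp = nn.Sequential(nn.Linear(EMB_DIM, EMB_DIM), nn.ReLU(),
                                 nn.Linear(EMB_DIM, EMB_DIM))
    def forward(self, atom_ids):               # (batch, num_atoms)
        return self.mlp(self.embed(atom_ids))  # (batch, num_atoms, EMB_DIM)

def mask_atoms(atom_ids, ratio=0.15):
    """Randomly replace a fraction of atom ids with MASK_ID (BERT-style)."""
    mask = torch.rand_like(atom_ids, dtype=torch.float) < ratio
    return atom_ids.masked_fill(mask, MASK_ID), mask

encoder = MoleculeEncoder()
pretrain_head = nn.Linear(EMB_DIM, NUM_ATOM_TYPES)  # predicts original atom type
opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(pretrain_head.parameters()), lr=1e-3)

# Stage 1: self-supervised pre-training on synthetic "unlabeled" molecules.
for step in range(100):
    atoms = torch.randint(0, NUM_ATOM_TYPES, (32, 20))
    corrupted, mask = mask_atoms(atoms)
    logits = pretrain_head(encoder(corrupted))
    loss = nn.functional.cross_entropy(logits[mask], atoms[mask])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: fine-tuning on a small labeled property-prediction task.
finetune_head = nn.Linear(EMB_DIM, 1)                # e.g. a regression target
ft_opt = torch.optim.Adam(list(encoder.parameters()) +
                          list(finetune_head.parameters()), lr=1e-4)
for step in range(50):
    atoms = torch.randint(0, NUM_ATOM_TYPES, (16, 20))   # synthetic labeled batch
    labels = torch.randn(16, 1)                          # synthetic property labels
    graph_repr = encoder(atoms).mean(dim=1)              # pool atoms to one vector
    loss = nn.functional.mse_loss(finetune_head(graph_repr), labels)
    ft_opt.zero_grad()
    loss.backward()
    ft_opt.step()

The key design point the sketch illustrates is that the pre-trained encoder weights are reused in the second stage, so only the small task-specific head must be learned from the limited labeled data.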
Pages: 6787-6795
Page count: 9
Related papers
50 records in total
  • [41] Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey
    Wang, Xiao
    Chen, Guangyao
    Qian, Guangwu
    Gao, Pengcheng
    Wei, Xiao-Yong
    Wang, Yaowei
    Tian, Yonghong
    Gao, Wen
    MACHINE INTELLIGENCE RESEARCH, 2023, 20 (04) : 447 - 482
  • [43] Give Me the Facts! A Survey on Factual Knowledge Probing in Pre-trained Language Models
    Youssef, Paul
    Koras, Osman Alperen
    Li, Meijie
    Schlotterer, Jorg
    Seifert, Christin
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 15588 - 15605
  • [44] Probing Pre-Trained Language Models for Disease Knowledge
    Alghanmi, Israa
    Espinosa-Anke, Luis
    Schockaert, Steven
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 3023 - 3033
  • [45] Analyzing Individual Neurons in Pre-trained Language Models
    Durrani, Nadir
    Sajjad, Hassan
    Dalvi, Fahim
    Belinkov, Yonatan
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 4865 - 4880
  • [46] Emotional Paraphrasing Using Pre-trained Language Models
    Casas, Jacky
    Torche, Samuel
    Daher, Karl
    Mugellini, Elena
    Abou Khaled, Omar
    2021 9TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION WORKSHOPS AND DEMOS (ACIIW), 2021,
  • [47] Text clustering based on pre-trained models and autoencoders
    Xu, Qiang
    Gu, Hao
    Ji, ShengWei
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2024, 17
  • [48] How to train your pre-trained GAN models
    Park, Sung-Wook
    Kim, Jun-Yeon
    Park, Jun
    Jung, Se-Hoon
    Sim, Chun-Bo
    APPLIED INTELLIGENCE, 2023, 53 (22) : 27001 - 27026
  • [49] Dynamic Knowledge Distillation for Pre-trained Language Models
    Li, Lei
    Lin, Yankai
    Ren, Shuhuai
    Li, Peng
    Zhou, Jie
    Sun, Xu
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 379 - 389
  • [50] Backdoor Pre-trained Models Can Transfer to All
    Shen, Lujia
    Ji, Shouling
    Zhang, Xuhong
    Li, Jinfeng
    Chen, Jing
    Shi, Jie
    Fang, Chengfang
    Yin, Jianwei
    Wang, Ting
    CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021, : 3141 - 3158