Multi-Task Mixture Density Graph Neural Networks for Predicting Catalyst Performance

Cited by: 18
Authors
Liang, Chen [1 ,2 ]
Wang, Bowen [3 ]
Hao, Shaogang [4 ]
Chen, Guangyong [5 ]
Heng, Pheng-Ann [3 ]
Zou, Xiaolong [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Inst Mat Res, Shenzhen Geim Graphene Ctr, Tsinghua Shenzhen Int Grad Sch, Shenzhen 518055, Peoples R China
[2] Tsinghua Univ, Inst Mat Res, Tsinghua Shenzhen Int Grad Sch, Shenzhen Key Lab Adv Layered Mat Value Added Appli, Shenzhen 518055, Peoples R China
[3] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong 999077, Peoples R China
[4] Tencent, Shenzhen 518054, Peoples R China
[5] Zhejiang Univ, Zhejiang Lab, Hangzhou 311121, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
catalyst design; CO2 reduction reaction; graph neural network; machine learning; multi-task learning; MACHINE LEARNING FRAMEWORK; CO2; REDUCTION; ELECTROREDUCTION; ELECTROCATALYSTS; SELECTIVITY; REACTIVITY; PRINCIPLE; DISCOVERY; DESIGN; MODEL;
DOI
10.1002/adfm.202404392
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Graph neural networks (GNNs) have drawn increasing attention from materials scientists and have demonstrated a strong capacity to link structures to properties. However, when only unrelaxed structures are provided as input, few GNN models can predict the thermodynamic properties of relaxed configurations with an acceptable level of error. In this work, a multi-task (MT) architecture based on DimeNet++ and mixture density networks is developed to improve performance on this task. Taking CO adsorption on Cu-based single-atom alloy catalysts as an example, the method reliably predicts CO adsorption energy with a mean absolute error of 0.087 eV from the initial CO adsorption structures, without costly first-principles calculations. Compared with other state-of-the-art GNN methods, the model shows improved generalization when predicting the catalytic performance of out-of-distribution configurations built with either unseen substrate surfaces or unseen doping species. The enhanced expressivity is further demonstrated on the initial-structure-to-relaxed-energy (IS2RE) task of the Open Catalyst 2020 project. The proposed MT GNN strategy can facilitate catalyst discovery and optimization.
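To illustrate the mixture-density component named in the abstract, the following is a minimal, hypothetical PyTorch sketch (not the authors' released code) of a Gaussian mixture density head that could sit on top of graph embeddings from an encoder such as DimeNet++. All module names, dimensions, and hyperparameters here (MDNHead, embed_dim, n_components) are illustrative assumptions.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class MDNHead(nn.Module):
    """Gaussian mixture density head over a scalar target (e.g., an adsorption energy)."""

    def __init__(self, embed_dim: int = 128, n_components: int = 5):
        super().__init__()
        self.pi = nn.Linear(embed_dim, n_components)         # mixture-weight logits
        self.mu = nn.Linear(embed_dim, n_components)         # component means
        self.log_sigma = nn.Linear(embed_dim, n_components)  # log std-devs, for numerical stability

    def forward(self, h):
        return self.pi(h), self.mu(h), self.log_sigma(h)

    @staticmethod
    def nll(pi_logits, mu, log_sigma, y):
        """Negative log-likelihood of targets y under the predicted Gaussian mixture."""
        log_pi = F.log_softmax(pi_logits, dim=-1)
        z = (y.unsqueeze(-1) - mu) / log_sigma.exp()
        log_prob = -0.5 * (z ** 2) - log_sigma - 0.5 * math.log(2 * math.pi)
        return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()


if __name__ == "__main__":
    # Toy usage: 'h' stands in for a batch of graph embeddings produced by a GNN encoder,
    # and 'y' for reference adsorption energies (eV). A multi-task setup would attach
    # several such heads and sum their (weighted) losses.
    h = torch.randn(8, 128)
    y = torch.randn(8)
    head = MDNHead()
    loss = MDNHead.nll(*head(h), y)
    loss.backward()
    print(f"MDN negative log-likelihood: {loss.item():.3f}")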
Pages: 12
Related Papers
50 items total
  • [41] Graph Mixture Density Networks
    Errica, Federico
    Bacciu, Davide
    Micheli, Alessio
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [42] AAGNet: A graph neural network towards multi-task machining feature recognition
    Wu, Hongjin
    Lei, Ruoshan
    Peng, Yibing
    Gao, Liang
    ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2024, 86
  • [43] Multi-task learning with graph attention networks for multi-domain task-oriented dialogue systems
    Zhao, Meng
    Wang, Lifang
    Jiang, Zejun
    Li, Ronghan
    Lu, Xinyu
    Hu, Zhongtian
    KNOWLEDGE-BASED SYSTEMS, 2023, 259
  • [44] Predicting Auditory Spatial Attention from EEG using Single- and Multi-task Convolutional Neural Networks
    Liu, Zhentao
    Mock, Jeffrey
    Huang, Yufei
    Golob, Edward
    2019 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC), 2019, : 1298 - 1303
  • [45] Cell tracking using deep neural networks with multi-task learning
    He, Tao
    Mao, Hua
    Guo, Jixiang
    Yi, Zhang
    IMAGE AND VISION COMPUTING, 2017, 60 : 142 - 153
  • [46] Evolutionary Multi-task Learning for Modular Training of Feedforward Neural Networks
    Chandra, Rohitash
    Gupta, Abhishek
    Ong, Yew-Soon
    Goh, Chi-Keong
    NEURAL INFORMATION PROCESSING, ICONIP 2016, PT II, 2016, 9948 : 37 - 46
  • [47] Simple, Efficient and Convenient Decentralized Multi-task Learning for Neural Networks
    Pilet, Amaury Bouchra
    Frey, Davide
    Taiani, Francois
    ADVANCES IN INTELLIGENT DATA ANALYSIS XIX, IDA 2021, 2021, 12695 : 37 - 49
  • [48] Adaptive Feature Aggregation in Deep Multi-Task Convolutional Neural Networks
    Cui, Chaoran
    Shen, Zhen
    Huang, Jin
    Chen, Meng
    Xu, Mingliang
    Wang, Meng
    Yin, Yilong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (04) : 2133 - 2144
  • [49] MULTI-TASK LEARNING IN DEEP NEURAL NETWORKS FOR IMPROVED PHONEME RECOGNITION
    Seltzer, Michael L.
    Droppo, Jasha
    2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013, : 6965 - 6969
  • [50] Improving generalization ability of neural networks ensemble with multi-task learning
    State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China
    J. COMPUT. INF. SYST., 2006, 4: 1235 - 1240