All in One: Multi-Task Prompting for Graph Neural Networks

Cited by: 40
Authors
Sun, Xiangguo [1 ,2 ]
Cheng, Hong [1 ,2 ]
Li, Jia [3 ]
Liu, Bo [4 ]
Guan, Jihong [5 ]
Affiliations
[1] Chinese Univ Hong Kong, Dept Syst Engn & Engn Management, Hong Kong, Peoples R China
[2] Chinese Univ Hong Kong, Shun Hing Inst Adv Engn, Hong Kong, Peoples R China
[3] Hong Kong Univ Sci & Technol, Data Sci & Analyt Thrust, Guangzhou, Peoples R China
[4] Southeast Univ, Sch Comp Sci & Engn, Purple Mt Labs, Nanjing, Peoples R China
[5] Tongji Univ, Dept Comp Sci & Technol, Shanghai, Peoples R China
Funding
National Key R&D Program of China;
Keywords
pre-training; prompt tuning; graph neural networks;
DOI
10.1145/3580305.3599256
Chinese Library Classification (CLC)
TP [Automation and Computer Technology];
Discipline classification code
0812;
Abstract
Recently, "pre-training and fine-tuning" has been adopted as a standard workflow for many graph tasks, since it transfers general graph knowledge to relieve the shortage of graph annotations in each application. However, node-level, edge-level, and graph-level tasks are highly diverse, so the pre-training pretext is often incompatible with these multiple tasks. This gap may even cause "negative transfer" to a specific application, leading to poor results. Inspired by prompt learning in natural language processing (NLP), which has proven effective in leveraging prior knowledge for various NLP tasks, we study prompting for graphs with the motivation of bridging the gap between pre-trained models and various graph tasks. In this paper, we propose a novel multi-task prompting method for graph models. Specifically, we first unify the format of graph prompts and language prompts via the prompt token, token structure, and inserting pattern, so that the prompting idea from NLP can be seamlessly introduced to the graph area. Then, to further narrow the gap between various graph tasks and state-of-the-art pre-training strategies, we study the task space of various graph applications and reformulate downstream problems as graph-level tasks. Afterward, we introduce meta-learning to efficiently learn a better initialization for the multi-task graph prompt, so that our prompting framework is more reliable and general across different tasks. Extensive experiments demonstrate the superiority of our method.
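The three ingredients the abstract names (prompt tokens, token structure, and an inserting pattern) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the dot-product similarity rule, and the `threshold` parameter are all illustrative assumptions. The idea shown: prompt tokens are extra feature vectors with their own internal adjacency, attached to the input graph through similarity-based cross links, yielding an augmented graph that a frozen pre-trained GNN can consume.

```python
import numpy as np

def insert_graph_prompt(X, A, P, P_adj, threshold=0.5):
    """Attach prompt tokens to a graph (illustrative sketch only).

    X     : (n, d) node feature matrix of the input graph
    A     : (n, n) adjacency matrix of the input graph
    P     : (k, d) learnable prompt token features ("prompt token")
    P_adj : (k, k) adjacency among prompt tokens ("token structure")
    The "inserting pattern" is modeled here as a cross link between a
    token and a node whenever their dot-product similarity exceeds
    `threshold` -- a hypothetical rule for demonstration purposes.
    """
    n, k = X.shape[0], P.shape[0]
    sim = P @ X.T                          # (k, n) token-node similarity
    cross = (sim > threshold).astype(float)

    # Augmented graph: original nodes first, prompt tokens appended.
    X_aug = np.vstack([X, P])
    A_aug = np.zeros((n + k, n + k))
    A_aug[:n, :n] = A                      # original graph structure
    A_aug[n:, n:] = P_adj                  # token-token structure
    A_aug[:n, n:] = cross.T                # node -> token links
    A_aug[n:, :n] = cross                  # token -> node links
    return X_aug, A_aug
```

During prompt tuning, only `P` (and optionally `P_adj`) would receive gradients while the pre-trained model stays frozen, which is what makes the approach lightweight relative to full fine-tuning.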
Pages: 2120-2131
Page count: 12
Related papers
50 records in total
  • [21] Multi-Task Spatiotemporal Neural Networks for Structured Surface Reconstruction
    Xu, Mingze
    Fan, Chenyou
    Paden, John D.
    Fox, Geoffrey C.
    Crandall, David J.
    2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2018), 2018, : 1273 - 1282
  • [22] Recommendation Algorithm for Multi-Task Learning with Directed Graph Convolutional Networks
    Yin, Lifeng
    Lu, Jianzheng
    Zheng, Guanghai
    Chen, Huayue
    Deng, Wu
APPLIED SCIENCES-BASEL, 2022, 12 (18)
  • [23] Attention-Aware Multi-Task Convolutional Neural Networks
    Lyu, Kejie
    Li, Yingming
    Zhang, Zhongfei
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 1867 - 1878
  • [24] Creating CREATE queries with multi-task deep neural networks
    Diker, S. Nazmi
    Sakar, C. Okan
    KNOWLEDGE-BASED SYSTEMS, 2023, 266
  • [25] Multi-Task Learning Based on Stochastic Configuration Neural Networks
    Dong, Xue-Mei
    Kong, Xudong
    Zhang, Xiaoping
    FRONTIERS IN BIOENGINEERING AND BIOTECHNOLOGY, 2022, 10
  • [26] Federated Multi-task Graph Learning
    Liu, Yijing
    Han, Dongming
    Zhang, Jianwei
    Zhu, Haiyang
    Xu, Mingliang
    Chen, Wei
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (05)
  • [27] Multi-Task Reinforcement Meta-Learning in Neural Networks
    Shakah, Ghazi
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2022, 13 (07) : 263 - 269
  • [28] Evolving Deep Parallel Neural Networks for Multi-Task Learning
    Wu, Jie
    Sun, Yanan
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2021, PT II, 2022, 13156 : 517 - 531
  • [29] Brain Networks Classification Based on an Adaptive Multi-Task Convolutional Neural Networks
    Xing X.
    Ji J.
    Yao Y.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2020, 57 (07): : 1449 - 1459
  • [30] Multi-Adaptive Optimization for multi-task learning with deep neural networks
Hervella, Álvaro S.
    Rouco, Jose
    Novo, Jorge
    Ortega, Marcos
    NEURAL NETWORKS, 2024, 170 : 254 - 265