Decoupled differentiable graph neural architecture search

Cited: 0
Authors
Chen, Jiamin [1 ]
Gao, Jianliang [1 ]
Wu, Zhenpeng [1 ]
Al-Sabri, Raeed [1 ]
Oloulade, Babatounde Moctard [1 ]
Affiliations
[1] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Hunan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph neural architecture search; Decoupled differentiable optimization; Supernet pruning; Graph neural network;
DOI
10.1016/j.ins.2024.120700
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Differentiable graph neural architecture search (GNAS) automatically and efficiently designs high-performing graph neural networks (GNNs) for different graph data distributions. Given a GNN search space containing multiple candidate GNN component operations, a differentiable GNAS method builds a mixed supernet in which learnable architecture parameters weight the candidate operations. Once the mixed supernet is optimized, it is pruned according to the best architecture parameters to efficiently identify the optimal GNN architecture in the search space. However, the multiplicative relationship between the architecture parameters and the candidate operations introduces a coupled optimization bias into the weight optimization of the supernet's candidate operations, degrading the performance of differentiable GNAS. To address this coupled optimization bias, we propose Decoupled Differentiable Graph Neural Architecture Search (D2GNAS). D2GNAS uses the Gumbel distribution as a bridge to decouple the optimization of the supernet's candidate-operation weights from that of the architecture parameters, yielding a decoupled differentiable GNN architecture sampler. The sampler selects promising GNN architectures by treating the architecture parameters as sampling probabilities, and it is further optimized with validation gradients derived from the sampled architectures. In addition, D2GNAS builds a single-path supernet and progressively compresses it with a pruning strategy to further improve search efficiency. Extensive experiments on multiple benchmark graphs show that D2GNAS outperforms all established baselines, including both hand-designed GNNs and existing GNAS methods. D2GNAS also has lower time complexity than previous differentiable GNAS methods, achieving an average 5x efficiency improvement in a fair GNN search space. Code is available at https://github.com/AutoMachine0/D2GNAS.
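The core mechanism described in the abstract, sampling a single-path architecture through the Gumbel distribution so that candidate-operation weights and architecture parameters are optimized separately, can be illustrated with a short sketch. The following is a minimal, hypothetical example assuming PyTorch: F.gumbel_softmax is a real PyTorch function, but CANDIDATE_OPS, prune_lowest, and the stand-in validation loss are illustrative inventions, not code from the D2GNAS repository.

```python
import torch
import torch.nn.functional as F

# Hypothetical candidate operations for one GNN component slot
# (names are illustrative, not taken from the D2GNAS codebase).
CANDIDATE_OPS = ["gcn", "gat", "sage", "gin"]

# Learnable architecture parameters: one logit per candidate operation.
arch_logits = torch.zeros(len(CANDIDATE_OPS), requires_grad=True)

def sample_architecture(logits, tau=1.0):
    """Draw a hard one-hot operation choice via the Gumbel-softmax trick.

    hard=True selects exactly one candidate (a single-path architecture),
    while the straight-through estimator still lets validation gradients
    flow back to the architecture logits, decoupling the two optimizations.
    """
    return F.gumbel_softmax(logits, tau=tau, hard=True)

one_hot = sample_architecture(arch_logits)
print("sampled operation:", CANDIDATE_OPS[int(one_hot.argmax())])

# Stand-in for the validation loss of the sampled single-path GNN;
# a real implementation would train and evaluate only the chosen op.
op_scores = torch.randn(len(CANDIDATE_OPS))
val_loss = -(one_hot * op_scores).sum()
val_loss.backward()  # gradients reach arch_logits through the sampler
print("architecture-logit gradients:", arch_logits.grad)

def prune_lowest(ops, logits, keep):
    """Progressively compress the supernet by keeping only the `keep`
    most probable candidates (a simplified stand-in for the paper's
    pruning strategy)."""
    top = F.softmax(logits, dim=-1).topk(keep).indices
    return [ops[i] for i in top], logits[top].detach().requires_grad_()

CANDIDATE_OPS, arch_logits = prune_lowest(CANDIDATE_OPS, arch_logits, keep=2)
print("remaining candidates after pruning:", CANDIDATE_OPS)
```

In the full method, this sample-evaluate-update loop would alternate with ordinary training of the sampled operation's own weights, so each set of parameters is updated without the multiplicative coupling of a mixed supernet.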
Pages: 18
Related Papers
50 records in total
  • [31] Regularized Differentiable Architecture Search
    Wang, Lanfei
    Xie, Lingxi
    Zhao, Kaili
    Guo, Jun
    Tian, Qi
    IEEE EMBEDDED SYSTEMS LETTERS, 2023, 15 (03) : 129 - 132
  • [32] The limitations of differentiable architecture search
    Lacharme, Guillaume
    Cardot, Hubert
    Lente, Christophe
    Monmarche, Nicolas
    PATTERN ANALYSIS AND APPLICATIONS, 2024, 27 (02)
  • [33] Group Differentiable Architecture Search
    Shen, Chaoyuan
    Xu, Jinhua
    IEEE ACCESS, 2021, 9 : 76585 - 76591
  • [34] Differentiable Neural Architecture, Mixed Precision and Accelerator Co-Search
    Chitty-Venkata, Krishna Teja
    Bian, Yiming
    Emani, Murali
    Vishwanath, Venkatram
    Somani, Arun K.
    IEEE ACCESS, 2023, 11 : 106670 - 106687
  • [35] Neural Network Architecture Search with Differentiable Cartesian Genetic Programming for Regression
    Martens, Marcus
    Izzo, Dario
    PROCEEDINGS OF THE 2019 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION (GECCO'19 COMPANION), 2019, : 181 - 182
  • [36] E-DNAS: Differentiable Neural Architecture Search for Embedded Systems
    Garcia Lopez, Javier
    Agudo, Antonio
    Moreno-Noguer, Francesc
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 4704 - 4711
  • [37] Partial Connection Based on Channel Attention for Differentiable Neural Architecture Search
    Xue, Yu
    Qin, Jiafeng
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19 (05) : 6804 - 6813
  • [38] HardCoRe-NAS: Hard Constrained diffeRentiable Neural Architecture Search
    Nayman, Niv
    Aflalo, Yonathan
    Noy, Asaf
    Zelnik, Lihi
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [39] DIPO: Differentiable Parallel Operation Blocks for Surgical Neural Architecture Search
    Lee, Matthew
    Sanchez-Matilla, Ricardo
    Stoyanov, Danail
    Luengo, Imanol
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (09) : 5540 - 5550
  • [40] Towards Improving the Consistency, Efficiency, and Flexibility of Differentiable Neural Architecture Search
    Yang, Yibo
    You, Shan
    Li, Hongyang
    Wang, Fei
    Qian, Chen
    Lin, Zhouchen
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 6663 - 6672