Towards Accurate Microstructure Estimation via 3D Hybrid Graph Transformer

Times Cited: 0
Authors
Yang, Junqing [1 ]
Jiang, Haotian [2 ]
Tassew, Tewodros [1 ]
Sun, Peng [1 ]
Ma, Jiquan [2 ]
Xia, Yong [1 ]
Yap, Pew-Thian [3 ,4 ]
Chen, Geng [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci & Engn, Natl Engn Lab Integrated Aerosp Ground Ocean Big, Xian, Peoples R China
[2] Heilongjiang Univ, Sch Comp Sci & Technol, Harbin, Peoples R China
[3] Univ N Carolina, Dept Radiol, Chapel Hill, NC USA
[4] Univ N Carolina, Biomed Res Imaging Ctr, Chapel Hill, NC USA
Funding
National Natural Science Foundation of China; US National Institutes of Health;
Keywords
Microstructure Imaging; Graph Neural Network; Transformer; 3D Spatial Domain; DIFFUSION;
DOI
10.1007/978-3-031-43993-3_3
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning has drawn increasing attention in microstructure estimation with undersampled diffusion MRI (dMRI) data. A representative method is the hybrid graph transformer (HGT), which achieves promising performance by integrating q-space graph learning and x-space transformer learning into a unified framework. However, this method overlooks the 3D spatial information as it relies on training with 2D slices. To address this limitation, we propose 3D hybrid graph transformer (3D-HGT), an advanced microstructure estimation model capable of making full use of 3D spatial information and angular information. To tackle the large computation burden associated with 3D x-space learning, we propose an efficient q-space learning model based on simplified graph neural networks. Furthermore, we propose a 3D x-space learning module based on the transformer. Extensive experiments on data from the human connectome project show that our 3D-HGT outperforms state-of-the-art methods, including HGT, in both quantitative and qualitative evaluations.
Pages: 25-34
Page count: 10