Point-to-Spike Residual Learning for Energy-Efficient 3D Point Cloud Classification

Cited by: 0
Authors
Wu, Qiaoyun [1 ,2 ,3 ]
Zhang, Quanxiao [1 ,2 ,3 ]
Tan, Chunyu [1 ,2 ,3 ]
Zhou, Yun [1 ,4 ]
Sun, Changyin [1 ,2 ,3 ]
Affiliations
[1] Anhui Univ, Sch Artificial Intelligence, Hefei, Peoples R China
[2] Minist Educ, Engn Res Ctr Autonomous Unmanned Syst Technol, Hefei, Peoples R China
[3] Anhui Prov Engn Res Ctr Unmanned Syst & Intellige, Hefei, Peoples R China
[4] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Spiking neural networks (SNNs) have revolutionized neural learning and are making remarkable strides in image analysis and robot control, with the added advantage of ultra-low power consumption. Inspired by this success, we investigate the application of spiking neural networks to 3D point cloud processing. We present a point-to-spike residual learning network for point cloud classification, which operates on points with binary spikes rather than floating-point numbers. Specifically, we first design a spatial-aware kernel point spiking neuron that ties spike generation to point position in 3D space. On this basis, we then design a 3D spiking residual block for effective feature learning on spike sequences. By stacking 3D spiking residual blocks, we build the point-to-spike residual classification network, which achieves low computation cost with little accuracy loss on two benchmark datasets, ModelNet40 and ScanObjectNN. Moreover, the classifier strikes a good balance between classification accuracy and biological plausibility, allowing us to explore deploying 3D processing on neuromorphic chips toward energy-efficient 3D robotic perception systems.
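To make the abstract's two building blocks concrete, below is a minimal NumPy sketch, not the authors' implementation: a standard leaky integrate-and-fire (LIF) neuron, a KPConv-style distance kernel standing in for the "spatial-aware kernel point spiking neuron", and an additive shortcut standing in for the "3D spiking residual block". All function names, shapes, and hyperparameters (tau, v_th, the linear distance correlation) are illustrative assumptions.

import numpy as np

def lif_spikes(currents, tau=2.0, v_th=1.0):
    # Leaky integrate-and-fire dynamics over T timesteps.
    # currents: (T, C) synaptic currents; returns binary spikes of shape (T, C).
    v = np.zeros(currents.shape[1])
    spikes = np.zeros_like(currents)
    for t in range(currents.shape[0]):
        v = v / tau + currents[t]                  # leaky integration
        s = (v >= v_th).astype(currents.dtype)     # fire where threshold is crossed
        v = v * (1.0 - s)                          # hard reset after a spike
        spikes[t] = s
    return spikes

def spatial_kernel_current(points, feats, kernel_pts, W):
    # Position-dependent aggregation: each kernel point weights each neighbor's
    # features by a linear correlation in 3D distance, so the synaptic current
    # (and hence spike generation) depends on point position.
    # points: (N, 3) neighbors centered on a query point; feats: (N, C_in)
    # kernel_pts: (K, 3) kernel point positions; W: (K, C_in, C_out)
    d = np.linalg.norm(points[None, :, :] - kernel_pts[:, None, :], axis=-1)  # (K, N)
    corr = np.maximum(0.0, 1.0 - d)                # nearby neighbors contribute more
    return np.einsum('kn,nc,kco->o', corr, feats, W)  # (C_out,)

def spiking_residual_block(x, weight):
    # Residual learning on spike sequences: the identity spike train is added
    # to the transformed current before the neuron fires again.
    return lif_spikes(x @ weight + x)

rng = np.random.default_rng(0)
points = rng.normal(size=(16, 3))                  # 16 neighbors of one query point
feats = rng.normal(size=(16, 8))                   # 8-dim input features
kernel_pts = rng.normal(size=(4, 3))               # 4 kernel points
W = 0.1 * rng.normal(size=(4, 8, 8))
current = np.tile(spatial_kernel_current(points, feats, kernel_pts, W), (4, 1))  # T=4
spikes = lif_spikes(current)                       # (4, 8) binary spikes
out = spiking_residual_block(spikes, 0.1 * rng.normal(size=(8, 8)))
print(spikes.shape, out.shape)                     # (4, 8) (4, 8)

Even in this toy, the design point the abstract emphasizes carries through: every intermediate tensor after lif_spikes is binary, so downstream matrix products reduce to additions, which is where the energy savings on neuromorphic hardware would come from.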
Pages: 6092-6099
Page count: 8
Related papers
50 items in total
  • [31] 3D meta-classification: A meta-learning approach for selecting 3D point-cloud classification algorithm
    Xu, Fan
    Chen, Jun
    Shi, Yizhou
    Ruan, Tianchen
    Wu, Qihui
    Zhang, Xiaofei
    INFORMATION SCIENCES, 2024, 662
  • [33] Point Encoder GAN: A deep learning model for 3D point cloud inpainting
    Yu, Yikuan
    Huang, Zitian
    Li, Fei
    Zhang, Haodong
    Le, Xinyi
    NEUROCOMPUTING, 2020, 384 : 192 - 199
  • [34] Point-voxel dual stream transformer for 3d point cloud learning
    Zhao, Tianmeng
    Zeng, Hui
    Zhang, Baoqing
    Fan, Bin
    Li, Chen
VISUAL COMPUTER, 2024, 40 (08): 5323 - 5339
  • [36] Feature Graph Learning for 3D Point Cloud Denoising
    Hu, Wei
    Gao, Xiang
    Cheung, Gene
    Guo, Zongming
IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2020, 68 : 2841 - 2856
  • [37] DEEP LEARNING FOR SEMANTIC SEGMENTATION OF 3D POINT CLOUD
    Malinverni, E. S.
    Pierdicca, R.
    Paolanti, M.
    Martini, M.
    Morbidoni, C.
    Matrone, F.
    Lingua, A.
27TH CIPA INTERNATIONAL SYMPOSIUM: DOCUMENTING THE PAST FOR A BETTER FUTURE, 2019, 42-2 (W15): 735 - 742
  • [38] Point Cloud Annotation Methods for 3D Deep Learning
    O'Mahony, Niall
    Campbell, Sean
    Carvalho, Anderson
    Krpalkova, Lenka
    Riordan, Daniel
    Walsh, Joseph
2019 13TH INTERNATIONAL CONFERENCE ON SENSING TECHNOLOGY (ICST), 2019
  • [39] Learning 3D Shape Latent for Point Cloud Completion
    Chen, Zhikai
    Long, Fuchen
    Qiu, Zhaofan
    Yao, Ting
    Zhou, Wengang
    Luo, Jiebo
    Mei, Tao
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 8717 - 8729
  • [40] Masked Autoencoders in 3D Point Cloud Representation Learning
    Jiang, Jincen
    Lu, Xuequan
    Zhao, Lizhi
    Dazeley, Richard
    Wang, Meili
    IEEE TRANSACTIONS ON MULTIMEDIA, 2025, 27 : 820 - 831