Direction-induced convolution for point cloud analysis

Cited: 3
Authors
Fang, Yuan [1 ]
Xu, Chunyan [1 ]
Zhou, Chuanwei [1 ]
Cui, Zhen [1 ]
Hu, Chunlong [2 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Jiangsu, Peoples R China
[2] Jiangsu Univ Sci & Technol, Sch Comp Sci & Engn, Zhenjiang 212003, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Point cloud; Convolution; Semantic segmentation; Classification; SEGMENTATION; NETWORKS;
DOI
10.1007/s00530-021-00770-0
CLC classification
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
Point cloud analysis has become a fundamental but challenging problem in 3D scene understanding. To deal with the unstructured and unordered nature of point clouds embedded in 3D space, we propose a novel direction-induced convolution (DIConv) that obtains hierarchical representations of point clouds and thereby boosts the performance of point cloud analysis. Specifically, we first construct a direction set as the basis of spatial direction information, whose entries denote the latent direction components of 3D points. For each neighbor point, we project its direction information onto the constructed direction set to obtain an array of direction-dependent weights, and then transform its features into the canonical, ordered direction-set space. After that, a standard image-like convolution can be leveraged to encode the unordered neighborhood regions of point cloud data. We further develop a residual DIConv (Res_DIConv) module and a farthest-point-sampling residual DIConv (FPS_Res_DIConv) module for jointly capturing the hierarchical features of input point clouds. By alternately stacking Res_DIConv and FPS_Res_DIConv modules, a direction-induced convolution network (DICNet) can be built to perform point cloud analysis in an end-to-end fashion. Comprehensive experiments on three benchmark datasets (ModelNet40, ShapeNet Part, and S3DIS) demonstrate that the proposed DIConv achieves encouraging performance on both point cloud classification and semantic segmentation tasks.
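The direction-set projection described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the softmax weighting over cosine similarities and the axis-aligned direction basis in the usage example are assumptions, and the learnable convolution kernel that would follow is omitted.

```python
import numpy as np

def diconv_sketch(neighbor_xyz, neighbor_feats, directions):
    """Hypothetical sketch of the DIConv projection for one center point.

    neighbor_xyz:   (K, 3) offsets of K neighbor points from the center
    neighbor_feats: (K, C) features of those neighbors
    directions:     (D, 3) unit vectors forming the direction set
    Returns a (D, C) array: features re-ordered into the canonical
    direction-set space, ready for a standard image-like convolution.
    """
    # Unit direction of each neighbor relative to the center point.
    norm = np.linalg.norm(neighbor_xyz, axis=1, keepdims=True) + 1e-8
    unit = neighbor_xyz / norm                     # (K, 3)
    # Direction-dependent weights: cosine similarity of each neighbor
    # to each basis direction, softmax-normalized per neighbor.
    sim = unit @ directions.T                      # (K, D)
    w = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    # Scatter features into ordered direction slots: slot d accumulates
    # the features of neighbors pointing roughly along direction d.
    return w.T @ neighbor_feats                    # (D, C)
```

Because each neighbor's weights sum to one, the projection redistributes, rather than rescales, the neighborhood features: summing the output over the D slots recovers the sum of the input features. A 1x1 convolution (or any kernel over the now-ordered D axis) can then be applied as in a regular image grid.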
Pages: 457-468 (12 pages)