DcTr: Noise-robust point cloud completion by dual-channel transformer with cross-attention
Cited by: 20
Authors: Fei, Ben [1]; Yang, Weidong [1,2]; Ma, Lipeng [1]; Chen, Wen-Ming [3]
Affiliations:
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Data Sci, Shanghai 200433, Peoples R China
[2] Zhuhai Fudan Innovat Inst, Hengqin New Area, Zhuhai 519000, Guangdong, Peoples R China
[3] Acad Engn & Technol, Shanghai 200433, Peoples R China
Funding:
National Natural Science Foundation of China;
Keywords:
Point cloud; 3D vision; Transformer; Cross-attention; Dual-channel transformer
DOI:
10.1016/j.patcog.2022.109051
Chinese Library Classification: TP18 [Artificial Intelligence Theory];
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
Current point cloud completion research mainly utilizes global shape representations and local features to recover the missing regions of a 3D shape from a partial point cloud. However, these methods suffer from inefficient use of local features and unstructured point prediction in local patches, and thus rarely produce a well-arranged point structure. To tackle these problems, we propose a Dual-channel Transformer with Cross-attention (CA) for point cloud completion (DcTr). DcTr is adept at exploiting local features while preserving a well-structured generation process. Specifically, the dual-channel transformer leverages point-wise attention and channel-wise attention to summarize the deconvolution patterns used in the previous Dual-channel Transformer Point Deconvolution (DCTPD) stage and produce the deconvolution in the current DCTPD stage. Meanwhile, we employ cross-attention to convey geometric information from local regions of the incomplete point cloud to the generation of the complete one at different resolutions. In this way, we can generate locally compact and structured point clouds by capturing the structural characteristics of the 3D shape in local patches. Our experimental results indicate that DcTr outperforms state-of-the-art point cloud completion methods on several benchmarks and is robust to various kinds of noise.
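As a rough illustration of the cross-attention mechanism the abstract describes, the sketch below shows single-head cross-attention in plain NumPy: queries come from the features of points being generated, while keys and values come from local features of the partial input cloud. All names, dimensions, and the single-head formulation are illustrative assumptions, not the paper's actual DcTr implementation.

```python
import numpy as np

def cross_attention(gen_feats, partial_feats, Wq, Wk, Wv):
    """Single-head cross-attention sketch (illustrative, not DcTr's exact design).

    gen_feats:     (M, d) features of points being generated (queries)
    partial_feats: (N, d) local features of the partial input cloud (keys/values)
    Wq, Wk, Wv:    (d, d) learned projection matrices (random here)
    """
    Q = gen_feats @ Wq           # (M, d) query projections
    K = partial_feats @ Wk       # (N, d) key projections
    V = partial_feats @ Wv       # (N, d) value projections
    scores = Q @ K.T / np.sqrt(Q.shape[-1])             # (M, N) scaled similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V           # (M, d) geometry-aware features for generation

rng = np.random.default_rng(0)
d = 16
out = cross_attention(rng.normal(size=(8, d)),   # 8 generated points
                      rng.normal(size=(32, d)),  # 32 partial-cloud features
                      rng.normal(size=(d, d)),
                      rng.normal(size=(d, d)),
                      rng.normal(size=(d, d)))
print(out.shape)  # (8, 16)
```

Each generated point's feature becomes a weighted mixture of the partial cloud's local features, which is how geometric information from the observed regions can guide the completed output.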
Pages: 13