TY - GEN
T1 - 3D Unsupervised Region-Aware Registration Transformer
AU - Hao, Yu
AU - Fang, Yi
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - This paper concerns the problem of point cloud registration: finding the rigid transformation that optimally aligns a source point set with a target one. Learning robust point cloud registration models with deep neural networks has emerged as a powerful paradigm, offering promising performance in predicting the global geometric transformation for a pair of point sets. Existing methods first leverage an encoder to regress a global shape descriptor, which is then decoded into a shape-conditioned transformation via concatenation-based conditioning. However, different regions of a 3D shape vary in their geometric structure, which motivates a region-conditioned transformation rather than a shape-conditioned one. In this paper, we define our 3D registration function by introducing a novel 3D region partition module that divides the input shape into different regions using a self-supervised 3D shape reconstruction loss, without the need for ground-truth labels. We further propose a 3D shape transformer module to efficiently and effectively capture short- and long-range geometric dependencies among regions on the 3D shape. A region-aware decoder module is then proposed to predict a transformation for each region. The global geometric transformation from the source point set to the target one is formed by the weighted fusion of the region-aware transformations. Our experiments show that 3D-URRT achieves registration performance superior to state-of-the-art approaches on various benchmark datasets (e.g., ModelNet40).
AB - This paper concerns the problem of point cloud registration: finding the rigid transformation that optimally aligns a source point set with a target one. Learning robust point cloud registration models with deep neural networks has emerged as a powerful paradigm, offering promising performance in predicting the global geometric transformation for a pair of point sets. Existing methods first leverage an encoder to regress a global shape descriptor, which is then decoded into a shape-conditioned transformation via concatenation-based conditioning. However, different regions of a 3D shape vary in their geometric structure, which motivates a region-conditioned transformation rather than a shape-conditioned one. In this paper, we define our 3D registration function by introducing a novel 3D region partition module that divides the input shape into different regions using a self-supervised 3D shape reconstruction loss, without the need for ground-truth labels. We further propose a 3D shape transformer module to efficiently and effectively capture short- and long-range geometric dependencies among regions on the 3D shape. A region-aware decoder module is then proposed to predict a transformation for each region. The global geometric transformation from the source point set to the target one is formed by the weighted fusion of the region-aware transformations. Our experiments show that 3D-URRT achieves registration performance superior to state-of-the-art approaches on various benchmark datasets (e.g., ModelNet40).
KW - 3D registration
KW - unsupervised registration
UR - http://www.scopus.com/inward/record.url?scp=85180803145&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85180803145&partnerID=8YFLogxK
U2 - 10.1109/ICIP49359.2023.10222489
DO - 10.1109/ICIP49359.2023.10222489
M3 - Conference contribution
AN - SCOPUS:85180803145
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 2780
EP - 2784
BT - 2023 IEEE International Conference on Image Processing, ICIP 2023 - Proceedings
PB - IEEE Computer Society
T2 - 30th IEEE International Conference on Image Processing, ICIP 2023
Y2 - 8 October 2023 through 11 October 2023
ER -