TY - GEN
T1 - Manifold Adversarial Learning for Cross-domain 3D Shape Representation
AU - Huang, Hao
AU - Chen, Cheng
AU - Fang, Yi
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - On a variety of 3D vision tasks, deep neural networks (DNNs) for point clouds have outperformed conventional non-learning-based methods. However, generalization to out-of-distribution 3D point clouds remains challenging for DNNs. As annotating large-scale point clouds is prohibitively expensive or even impossible, strategies for generalizing DNN models to unseen point cloud domains without access to those domains during training are urgently needed but have yet to be substantially investigated. In this paper, we design an adversarial learning scheme to learn point cloud representations on a seen source domain and then generalize the learned knowledge to an unseen target domain. Specifically, we unify several geometric transformations into a manifold-based framework under which a distance between transformations is well-defined. Measured by this distance, adversarial samples are mined to form intermediate domains and are retained in an adaptive replay-based memory. We further provide theoretical justification that the intermediate domains reduce the generalization error of DNN models. Experimental results on synthetic-to-real datasets show that our method outperforms existing 3D deep learning models for domain generalization.
AB - On a variety of 3D vision tasks, deep neural networks (DNNs) for point clouds have outperformed conventional non-learning-based methods. However, generalization to out-of-distribution 3D point clouds remains challenging for DNNs. As annotating large-scale point clouds is prohibitively expensive or even impossible, strategies for generalizing DNN models to unseen point cloud domains without access to those domains during training are urgently needed but have yet to be substantially investigated. In this paper, we design an adversarial learning scheme to learn point cloud representations on a seen source domain and then generalize the learned knowledge to an unseen target domain. Specifically, we unify several geometric transformations into a manifold-based framework under which a distance between transformations is well-defined. Measured by this distance, adversarial samples are mined to form intermediate domains and are retained in an adaptive replay-based memory. We further provide theoretical justification that the intermediate domains reduce the generalization error of DNN models. Experimental results on synthetic-to-real datasets show that our method outperforms existing 3D deep learning models for domain generalization.
KW - 3D point cloud
KW - Adversarial learning
KW - Domain generalization
KW - Manifold and memory
UR - http://www.scopus.com/inward/record.url?scp=85142698796&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85142698796&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-19809-0_16
DO - 10.1007/978-3-031-19809-0_16
M3 - Conference contribution
AN - SCOPUS:85142698796
SN - 9783031198083
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 272
EP - 289
BT - Computer Vision – ECCV 2022 – 17th European Conference, 2022, Proceedings
A2 - Avidan, Shai
A2 - Brostow, Gabriel
A2 - Cissé, Moustapha
A2 - Farinella, Giovanni Maria
A2 - Hassner, Tal
PB - Springer Science and Business Media Deutschland GmbH
T2 - 17th European Conference on Computer Vision, ECCV 2022
Y2 - 23 October 2022 through 27 October 2022
ER -