On a variety of 3D vision tasks, deep neural networks (DNNs) for point clouds have outperformed conventional non-learning-based methods. However, generalization to out-of-distribution 3D point clouds remains challenging for DNNs. As annotating large-scale point clouds is prohibitively expensive or even impossible, strategies for generalizing DNN models to unseen domains of point clouds, without access to those domains during training, are urgently needed but have yet to be substantially investigated. In this paper, we design an adversarial learning scheme that learns point cloud representations on a seen source domain and then generalizes the learned knowledge to an unseen target domain. Specifically, we unify several geometric transformations into a manifold-based framework under which a distance between transformations is well-defined. Guided by this distance, adversarial samples are mined to form intermediate domains and are retained in an adaptive replay-based memory. We further provide theoretical justification that the intermediate domains reduce the generalization error of the DNN models. Experimental results on synthetic-to-real datasets demonstrate that our method outperforms existing 3D deep learning models for domain generalization.
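As a concrete illustration only (not the paper's actual implementation): when the geometric transformations are rigid rotations, the manifold in question is SO(3), and a natural well-defined distance between two transformations is the geodesic distance, i.e., the angle of the relative rotation. The sketch below, under that assumption, pairs such a distance with a toy diversity-based replay memory that retains a mined transformation only if it is sufficiently far from those already stored; all names (`geodesic_distance`, `TransformationMemory`, `min_dist`) are hypothetical.

```python
import numpy as np

def rotation_z(theta):
    """Rotation about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def geodesic_distance(R1, R2):
    """Geodesic distance on the SO(3) manifold: the angle of the
    relative rotation R1^T R2, recovered from its trace."""
    cos_angle = (np.trace(R1.T @ R2) - 1.0) / 2.0
    # Clip to guard against floating-point values just outside [-1, 1].
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

class TransformationMemory:
    """Toy replay-based memory (hypothetical): keep a transformation
    only if its geodesic distance to every stored one is >= min_dist."""
    def __init__(self, min_dist=0.5):
        self.min_dist = min_dist
        self.items = []

    def maybe_add(self, R):
        if all(geodesic_distance(R, S) >= self.min_dist for S in self.items):
            self.items.append(R)
            return True
        return False

mem = TransformationMemory(min_dist=0.5)
mem.maybe_add(rotation_z(0.0))  # identity: kept (memory was empty)
mem.maybe_add(rotation_z(0.1))  # distance 0.1 < 0.5: rejected
mem.maybe_add(rotation_z(1.0))  # distance 1.0 >= 0.5: kept
```

In this sketch, the distance threshold plays the role of the paper's adaptive criterion: mined samples that are too similar to previously seen transformations are discarded, so the memory spans a diverse set of intermediate domains.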