Supervised learning on 3D shapes has been extensively studied in prior literature, with PointNet and its variant PointNet++ as representatives. However, these methods tackle 3D shape learning by training from scratch with a fixed learning algorithm over large amounts of labeled data, and are therefore potentially challenged by data and computation bottlenecks. In this paper, we design a novel model, under the framework of meta-learning, to learn 3D shape representations. By training over multiple 3D tasks, each defined as a supervised learning problem, our method can quickly adapt to unseen tasks containing limited labeled data. Specifically, our model consists of a 3D-meta-learner and a task-oriented 3D-learner, where the 3D-meta-learner produces a parameter initialization for the 3D-learner after being trained over different tasks. With adaptively initialized parameters, the 3D-learner can be tuned rapidly in a few steps to achieve good performance on novel tasks with a small amount of training data. To further facilitate discriminative shape feature learning, we introduce a novel task-aware feature adaptation module under a contrastive learning scheme, in which all shapes in each task are considered as a whole and task-oriented compact features are learned. We therefore dub our model 3DMetaConNet. Experiments on three public 3D datasets for few-shot shape classification and segmentation demonstrate that our method learns compact and discriminative 3D shape features efficiently and robustly in a fast-adaptation manner. In particular, our method outperforms methods without a meta-learning framework and is also superior to existing meta-learning approaches.
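The "learn an initialization, then adapt in a few steps" idea described above can be sketched in miniature with first-order meta-learning on a toy regression problem. This is an illustrative analogy only, not the paper's actual 3D architecture: the tasks, the scalar linear model, and the Reptile-style meta-update below are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # Toy "task": 1-D linear regression y = a * x, with slope a drawn per task.
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=10)
    return x, a * x

def loss_and_grad(w, x, y):
    # Squared-error loss of the model y_hat = w * x, and its gradient in w.
    err = w * x - y
    return np.mean(err ** 2), 2.0 * np.mean(err * x)

def adapt(w, x, y, alpha=0.1, steps=5):
    # Inner loop: a few gradient steps starting from the meta-learned init.
    for _ in range(steps):
        _, g = loss_and_grad(w, x, y)
        w -= alpha * g
    return w

# Outer loop (first-order, Reptile-style): nudge the initialization toward
# the post-adaptation parameters averaged over many sampled tasks.
w0 = 0.0
for _ in range(200):
    x, y = make_task()
    w0 += 0.05 * (adapt(w0, x, y) - w0)

# Fast adaptation on an unseen task with only a few labeled samples.
x_new, y_new = make_task()
before, _ = loss_and_grad(w0, x_new, y_new)
after, _ = loss_and_grad(adapt(w0, x_new, y_new), x_new, y_new)
```

After meta-training, a handful of inner-loop steps from `w0` reduce the loss on a previously unseen task, which is the behavior the abstract attributes to the adaptively initialized 3D-learner.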