TY - GEN
T1 - Unsupervised learning of 3D model reconstruction from hand-drawn sketches
AU - Wang, Lingjing
AU - Wang, Jifei
AU - Qian, Cheng
AU - Fang, Yi
N1 - Publisher Copyright:
© 2018 Association for Computing Machinery.
PY - 2018/10/15
Y1 - 2018/10/15
N2 - 3D object modeling has gained considerable attention in the visual computing community. We propose a low-cost unsupervised learning model for 3D object reconstruction from hand-drawn sketches. Recent advances in deep learning have opened new opportunities to learn high-quality 3D objects from 2D sketches via supervised networks. However, the limited availability of labeled 2D hand-drawn sketch data (i.e., sketches and their corresponding 3D ground-truth models) hinders the training of supervised methods. In this paper, driven by a novel combination of retrieval and reconstruction, we develop a learning paradigm that reconstructs 3D objects from hand-drawn sketches without using well-labeled hand-drawn sketch data at any point during training. Specifically, the paradigm begins by training an adaptation network, an autoencoder with an adversarial loss, which embeds the unpaired 2D rendered-image domain and the hand-drawn sketch domain into a shared latent vector space. Then, for each test sketch, we retrieve a few (e.g., five) nearest neighbors in this latent space from the training 3D data set and use them as prior knowledge for a 3D Generative Adversarial Network. Our experiments verify the network's robust and superior performance in generating 3D volumetric objects from a single hand-drawn sketch without requiring any 3D ground-truth labels.
AB - 3D object modeling has gained considerable attention in the visual computing community. We propose a low-cost unsupervised learning model for 3D object reconstruction from hand-drawn sketches. Recent advances in deep learning have opened new opportunities to learn high-quality 3D objects from 2D sketches via supervised networks. However, the limited availability of labeled 2D hand-drawn sketch data (i.e., sketches and their corresponding 3D ground-truth models) hinders the training of supervised methods. In this paper, driven by a novel combination of retrieval and reconstruction, we develop a learning paradigm that reconstructs 3D objects from hand-drawn sketches without using well-labeled hand-drawn sketch data at any point during training. Specifically, the paradigm begins by training an adaptation network, an autoencoder with an adversarial loss, which embeds the unpaired 2D rendered-image domain and the hand-drawn sketch domain into a shared latent vector space. Then, for each test sketch, we retrieve a few (e.g., five) nearest neighbors in this latent space from the training 3D data set and use them as prior knowledge for a 3D Generative Adversarial Network. Our experiments verify the network's robust and superior performance in generating 3D volumetric objects from a single hand-drawn sketch without requiring any 3D ground-truth labels.
KW - Generative Model
KW - Sketch Modeling
KW - Unsupervised Learning
UR - http://www.scopus.com/inward/record.url?scp=85058219421&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85058219421&partnerID=8YFLogxK
U2 - 10.1145/3240508.3240699
DO - 10.1145/3240508.3240699
M3 - Conference contribution
AN - SCOPUS:85058219421
T3 - MM 2018 - Proceedings of the 2018 ACM Multimedia Conference
SP - 1820
EP - 1828
BT - MM 2018 - Proceedings of the 2018 ACM Multimedia Conference
PB - Association for Computing Machinery, Inc
T2 - 26th ACM Multimedia Conference, MM 2018
Y2 - 22 October 2018 through 26 October 2018
ER -