TY - JOUR
T1 - Field-aligned online surface reconstruction
AU - Tarini, Marco
AU - Jakob, Wenzel
AU - Kazhdan, Misha
AU - Gumhold, Stefan
AU - Panozzo, Daniele
N1 - Funding Information:
This work was supported by the NSF CAREER award 1652515, NSF award 1422325, MIUR project DSurf, and a fellowship within the FITweltweit program of the German Academic Exchange Service (DAAD).
Publisher Copyright:
© 2017 ACM.
PY - 2017
Y1 - 2017
N2 - Today's 3D scanning pipelines can be classified into two overarching categories: offline, high-accuracy methods that rely on global optimization to reconstruct complex scenes with hundreds of millions of samples, and online methods that produce real-time but low-quality output, usually from structure-from-motion or depth sensors. The method proposed in this paper is the first to combine the benefits of both approaches, supporting online reconstruction of scenes with hundreds of millions of samples from high-resolution sensing modalities such as structured light or laser scanners. The key property of our algorithm is that it sidesteps the signed-distance computation of classical reconstruction techniques in favor of direct filtering, parametrization, and mesh and texture extraction. All of these steps can be realized using only weak notions of spatial neighborhoods, which allows for an implementation that scales approximately linearly with the size of each dataset that is integrated into a partial reconstruction. Combined, these algorithmic differences enable a drastically more efficient output-driven interactive scanning and reconstruction workflow, where the user is able to see the final-quality field-aligned textured mesh during the entirety of the scanning procedure. Holes or parts with registration problems are displayed in real time to the user and can be easily resolved by adding further localized scans, or by adjusting the input point cloud using our interactive editing tools with immediate visual feedback on the output mesh. We demonstrate the effectiveness of our algorithm in conjunction with a state-of-the-art structured light scanner and optical tracking system and test it on a large variety of challenging models.
AB - Today's 3D scanning pipelines can be classified into two overarching categories: offline, high-accuracy methods that rely on global optimization to reconstruct complex scenes with hundreds of millions of samples, and online methods that produce real-time but low-quality output, usually from structure-from-motion or depth sensors. The method proposed in this paper is the first to combine the benefits of both approaches, supporting online reconstruction of scenes with hundreds of millions of samples from high-resolution sensing modalities such as structured light or laser scanners. The key property of our algorithm is that it sidesteps the signed-distance computation of classical reconstruction techniques in favor of direct filtering, parametrization, and mesh and texture extraction. All of these steps can be realized using only weak notions of spatial neighborhoods, which allows for an implementation that scales approximately linearly with the size of each dataset that is integrated into a partial reconstruction. Combined, these algorithmic differences enable a drastically more efficient output-driven interactive scanning and reconstruction workflow, where the user is able to see the final-quality field-aligned textured mesh during the entirety of the scanning procedure. Holes or parts with registration problems are displayed in real time to the user and can be easily resolved by adding further localized scans, or by adjusting the input point cloud using our interactive editing tools with immediate visual feedback on the output mesh. We demonstrate the effectiveness of our algorithm in conjunction with a state-of-the-art structured light scanner and optical tracking system and test it on a large variety of challenging models.
KW - Parameterization
KW - Surface reconstruction
UR - http://www.scopus.com/inward/record.url?scp=85030773868&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85030773868&partnerID=8YFLogxK
U2 - 10.1145/3072959.3073635
DO - 10.1145/3072959.3073635
M3 - Conference article
AN - SCOPUS:85030773868
SN - 0730-0301
VL - 36
JO - ACM Transactions on Graphics
JF - ACM Transactions on Graphics
IS - 4
M1 - 77
T2 - ACM SIGGRAPH 2017
Y2 - 30 July 2017 through 3 August 2017
ER -