TY - GEN
T1 - SSCBench
T2 - 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2024
AU - Li, Yiming
AU - Li, Sihang
AU - Liu, Xinhao
AU - Gong, Moonjun
AU - Li, Kenan
AU - Chen, Nuo
AU - Wang, Zijun
AU - Li, Zhiheng
AU - Jiang, Tao
AU - Yu, Fisher
AU - Wang, Yue
AU - Zhao, Hang
AU - Yu, Zhiding
AU - Feng, Chen
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - Monocular scene understanding is a foundational component of autonomous systems. Within the spectrum of monocular perception topics, one crucial and useful task for holistic 3D scene understanding is semantic scene completion (SSC), which jointly completes semantic information and geometric details from RGB input. However, progress in SSC, particularly in large-scale street views, is hindered by the scarcity of high-quality datasets. To address this issue, we introduce SSCBench, a comprehensive benchmark that integrates scenes from widely used automotive datasets (e.g., KITTI-360, nuScenes, and Waymo). SSCBench follows an established setup and format in the community, facilitating the easy exploration of SSC methods in various street views. We benchmark models using monocular, trinocular, and point cloud input to assess the performance gap resulting from sensor coverage and modality. Moreover, we have unified semantic labels across diverse datasets to simplify cross-domain generalization testing. We commit to including more datasets and SSC models to drive further advancements in this field. Our data and code are available at https://github.com/ai4ce/SSCBench.
UR - http://www.scopus.com/inward/record.url?scp=85214117789&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85214117789&partnerID=8YFLogxK
DO - 10.1109/IROS58592.2024.10802143
M3 - Conference contribution
AN - SCOPUS:85214117789
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 13333
EP - 13340
BT - 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 14 October 2024 through 18 October 2024
ER -