TY - JOUR
T1 - Multi-Robot Scene Completion
T2 - 6th Conference on Robot Learning, CoRL 2022
AU - Li, Yiming
AU - Zhang, Juexiao
AU - Ma, Dekun
AU - Wang, Yue
AU - Feng, Chen
N1 - Funding Information:
We thank the anonymous reviewers for their valuable comments in revising this paper. This work was supported by the NSF CPS Program under Grant CMMI-1932187 and CNS-2121391.
Publisher Copyright:
© 2023 Proceedings of Machine Learning Research. All rights reserved.
PY - 2023
Y1 - 2023
N2 - Collaborative perception learns how to share information among multiple robots so that they perceive the environment better than any single robot could alone. Prior research has been task-specific, e.g., detection or segmentation, which leads to different information being shared for different tasks and hinders the large-scale deployment of collaborative perception. We propose the first task-agnostic collaborative perception paradigm that learns a single collaboration module in a self-supervised manner for different downstream tasks. This is achieved through a novel task termed multi-robot scene completion, where each robot learns to effectively share information for reconstructing a complete scene viewed by all robots. Moreover, we propose a spatiotemporal autoencoder (STAR) that amortizes the communication cost over time via spatial sub-sampling and temporal mixing. Extensive experiments validate our method's effectiveness on scene completion and collaborative perception in autonomous driving scenarios. Our code is available at https://coperception.github.io/star/.
KW - Multi-Robot Perception
KW - Representation Learning
KW - Scene Completion
UR - http://www.scopus.com/inward/record.url?scp=85164965811&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85164965811&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85164965811
SN - 2640-3498
VL - 205
SP - 2062
EP - 2072
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 14 December 2022 through 18 December 2022
ER -