Abstract
Collaborative perception learns how to share information among multiple robots so that they can perceive the environment better than any robot could on its own. Past research on this topic has been task-specific, such as detection or segmentation, which leads to different information being shared for different tasks and hinders the large-scale deployment of collaborative perception. We propose the first task-agnostic collaborative perception paradigm that learns a single collaboration module in a self-supervised manner for different downstream tasks. This is done through a novel task termed multi-robot scene completion, where each robot learns to effectively share information for reconstructing the complete scene observed by all robots. Moreover, we propose a spatiotemporal autoencoder (STAR) that amortizes the communication cost over time through spatial sub-sampling and temporal mixing. Extensive experiments validate our method's effectiveness on scene completion and collaborative perception in autonomous driving scenarios. Our code is available at https://coperception.github.io/star/.
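To make the abstract's description of STAR concrete, here is a minimal PyTorch sketch of the two cost-saving mechanisms it names: spatial sub-sampling of a robot's bird's-eye-view (BEV) features before transmission, and temporal mixing of messages from several recent frames before decoding the complete scene. This is an illustrative reconstruction under our own assumptions, not the paper's implementation; the module names (`SpatialSubsample`, `TemporalMixer`, `STARSketch`), the top-k scoring rule, and all tensor shapes are hypothetical.

```python
import torch
import torch.nn as nn


class SpatialSubsample(nn.Module):
    """Keep only the top-k most informative spatial cells of a BEV feature
    map before transmission, reducing per-frame communication cost.
    (Hypothetical scoring rule: per-cell L2 feature norm.)"""

    def __init__(self, keep_ratio: float = 0.25):
        super().__init__()
        self.keep_ratio = keep_ratio

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W). Score each spatial cell, keep the top fraction.
        b, c, h, w = feat.shape
        scores = feat.norm(dim=1).flatten(1)          # (B, H*W)
        k = max(1, int(self.keep_ratio * h * w))
        topk = scores.topk(k, dim=1).indices          # (B, k)
        mask = torch.zeros_like(scores)
        mask.scatter_(1, topk, 1.0)
        return feat * mask.view(b, 1, h, w)           # zero out dropped cells


class TemporalMixer(nn.Module):
    """Fuse the current sparse message with features from earlier frames,
    amortizing over time the information each robot must send."""

    def __init__(self, channels: int, num_frames: int):
        super().__init__()
        self.mix = nn.Conv2d(channels * num_frames, channels, kernel_size=1)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (B, T, C, H, W) -> concatenate frames along channels, mix.
        b, t, c, h, w = history.shape
        return self.mix(history.reshape(b, t * c, h, w))


class STARSketch(nn.Module):
    """Toy autoencoder: subsample each frame spatially, mix temporally,
    then decode the complete scene from the fused features."""

    def __init__(self, channels: int = 64, num_frames: int = 3):
        super().__init__()
        self.subsample = SpatialSubsample(keep_ratio=0.25)
        self.mixer = TemporalMixer(channels, num_frames)
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, bev_frames: torch.Tensor) -> torch.Tensor:
        # bev_frames: (B, T, C, H, W) BEV features from T recent frames.
        sparse = torch.stack(
            [self.subsample(f) for f in bev_frames.unbind(dim=1)], dim=1)
        fused = self.mixer(sparse)
        return self.decoder(fused)  # reconstruction of the complete scene


if __name__ == "__main__":
    model = STARSketch()
    frames = torch.randn(2, 3, 64, 32, 32)
    print(model(frames).shape)  # torch.Size([2, 64, 32, 32])
```

In a scene-completion setup like the one the abstract describes, a reconstruction loss against the scene jointly observed by all robots would supervise this autoencoder, and the learned collaboration module could then be reused, unchanged, by different downstream task heads.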
| Original language | English (US) |
|---|---|
| Pages (from-to) | 2062-2072 |
| Number of pages | 11 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 205 |
| State | Published - 2023 |
| Event | 6th Conference on Robot Learning, CoRL 2022, Auckland, New Zealand (Dec 14 2022 → Dec 18 2022) |
Keywords
- Multi-Robot Perception
- Representation Learning
- Scene Completion
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability