Just Label What You Need: Fine-Grained Active Selection for P&P through Partially Labeled Scenes

Sean Segal, Nishanth Kumar, Sergio Casas, Wenyuan Zeng, Mengye Ren, Jingkang Wang, Raquel Urtasun

Research output: Contribution to journal › Conference article › peer-review


Self-driving vehicles must perceive and predict the future positions of nearby actors to avoid collisions and drive safely. A deep learning module is often responsible for this task, requiring large-scale, high-quality training datasets. Because labeling is expensive, active learning approaches are an appealing solution for maximizing model performance under a given labeling budget. However, despite this appeal, there has been little scientific analysis of active learning approaches for the perception and prediction (P&P) problem. In this work, we study active learning techniques for P&P and find that the traditional active learning formulation is ill-suited. We thus introduce generalizations that make our approach cost-aware and allow fine-grained selection of examples through partially labeled scenes. Extensive experiments on a real-world dataset suggest significant improvements across perception, prediction, and downstream planning tasks.
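The abstract's core idea of cost-aware, fine-grained selection can be illustrated with a minimal sketch. This is not the paper's method; it is a hypothetical greedy scheme, assuming each candidate is a single actor within a scene with a model-uncertainty score and an estimated labeling cost, and candidates are picked by uncertainty-per-cost until the budget is spent (so scenes can end up partially labeled):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A labeling candidate: one actor within a scene (hypothetical unit)."""
    scene_id: int
    actor_id: int
    score: float   # model uncertainty, e.g. predictive entropy (assumed given)
    cost: float    # estimated labeling cost for this actor (assumed given)

def select_partial_labels(candidates, budget):
    """Greedy cost-aware selection: rank candidates by uncertainty-per-cost
    and take them while they fit in the labeling budget. Because selection
    is per-actor rather than per-scene, scenes may be partially labeled."""
    ranked = sorted(candidates, key=lambda c: c.score / c.cost, reverse=True)
    selected, spent = [], 0.0
    for c in ranked:
        if spent + c.cost <= budget:
            selected.append(c)
            spent += c.cost
    return selected
```

A usage sketch: with candidates `[(scene 0, actor 0, score 0.9, cost 1.0), (scene 0, actor 1, 0.2, 1.0), (scene 1, actor 0, 0.8, 2.0)]` and a budget of 2.0, the greedy picks the two scene-0 actors, skipping the higher-scoring but costlier scene-1 actor.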

Original language: English (US)
Pages (from-to): 816-826
Number of pages: 11
Journal: Proceedings of Machine Learning Research
State: Published - 2021
Event: 5th Conference on Robot Learning, CoRL 2021 - London, United Kingdom
Duration: Nov 8, 2021 - Nov 11, 2021

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

