Collision Replay: What Does Bumping Into Things Tell You About Scene Geometry?

Alexander Raistrick, Nilesh Kulkarni, David F. Fouhey

Research output: Contribution to conference › Paper › peer-review


What does bumping into things in past scenes tell you about scene geometry in a new scene? In this paper, we investigate the idea of learning from collisions. At the heart of our approach is the idea of collision replay, where after a collision an agent associates the pre-collision observations (such as images or sound collected by the agent) with the time until the next collision. These samples enable training a deep network that can map the pre-collision observations to information about scene geometry. Specifically, we use collision replay to train a model to predict a distribution over collision time from new observations by using supervision from bumps. We learn this distribution conditioned on visual data or echolocation responses. This distribution conveys information about the navigational affordances (e.g., corridors vs open spaces) and, as we show, can be converted into the distance function for the scene geometry. We analyze our approach with a noisily actuated agent in a photorealistic simulator.
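The core labeling step described above — associating each pre-collision observation with the time until the agent's next collision — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, the boolean collision sequence, and the `max_steps` clipping are assumptions for the sketch. The resulting labels would serve as classification targets over discretized time bins for the deep network mentioned in the abstract.

```python
def collision_replay_labels(collided, max_steps=10):
    """Label each timestep with the number of steps until the next collision.

    Hypothetical helper illustrating the collision-replay supervision signal:
    `collided` is a boolean sequence of per-step collision events; timesteps
    with no future collision are left at -1, and labels are clipped to
    `max_steps` so they can index a fixed set of time bins.
    """
    T = len(collided)
    labels = [-1] * T
    next_hit = None
    # Sweep backward so each step sees the nearest upcoming collision.
    for t in range(T - 1, -1, -1):
        if collided[t]:
            next_hit = t
        if next_hit is not None:
            labels[t] = min(next_hit - t, max_steps)
    return labels

# Example: collisions at steps 3 and 6 of a 7-step trajectory.
print(collision_replay_labels([0, 0, 0, 1, 0, 0, 1]))
# → [3, 2, 1, 0, 2, 1, 0]
```

Each label pairs with the observation recorded at that timestep (image or echolocation response), yielding training samples without any explicit geometric annotation.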

Original language: English (US)
State: Published - 2021
Event: 32nd British Machine Vision Conference, BMVC 2021 - Virtual, Online
Duration: Nov 22, 2021 to Nov 25, 2021


Conference: 32nd British Machine Vision Conference, BMVC 2021
City: Virtual, Online

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition


