Abstract
In this letter, we address the problem of providing human-assisted quadrotor navigation using a pair of eye-tracking glasses. The advent of such devices (e.g., eye-tracking glasses and virtual-reality tools) provides the opportunity to create new, noninvasive forms of interaction between humans and robots. We show how a pair of glasses equipped with a gaze tracker, a camera, and an inertial measurement unit (IMU) can be used to estimate the relative position of the human with respect to a quadrotor and to decouple the gaze direction from the head orientation, which allows the human to spatially task (i.e., send new 3-D navigation waypoints to) the robot in an uninstrumented environment. Specifically, we decouple the gaze direction from head motion by tracking the human's head orientation using a combination of camera and IMU data. To detect the flying robot, we train and deploy a deep neural network. We experimentally evaluate the proposed approach and show that our pipeline has the potential to enable gaze-driven autonomy for spatial tasking. The proposed approach can be employed in multiple scenarios, including inspection and first response, as well as by people with disabilities that affect their mobility.
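To make the gaze/head-decoupling step concrete, the sketch below composes an eye-in-head gaze direction with a fused head-orientation estimate to obtain a world-frame pointing ray, then intersects that ray with a horizontal plane to produce a 3-D waypoint. This is a minimal sketch, not the letter's implementation: the complementary-style SLERP fusion, the frame conventions, and the fixed waypoint plane are all illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): compose a world-frame
# gaze ray from a head-orientation estimate and an eye-in-head gaze
# direction, then turn it into a 3-D waypoint. The fusion scheme,
# frame names, and waypoint plane are illustrative assumptions.
import numpy as np
from scipy.spatial.transform import Rotation as R

def fuse_head_orientation(q_imu, q_cam, alpha=0.98):
    """Blend IMU- and camera-based head-orientation estimates.

    A simple complementary-style blend: keep the smooth but drift-prone
    IMU estimate with weight `alpha`, and correct toward the
    camera-based estimate with weight 1 - alpha (geodesic interpolation).
    """
    r_imu, r_cam = R.from_quat(q_imu), R.from_quat(q_cam)
    correction = (r_imu.inv() * r_cam).as_rotvec()
    return r_imu * R.from_rotvec((1.0 - alpha) * correction)

def gaze_ray_world(head_rot, gaze_dir_head, head_pos_world):
    """Map an eye-in-head gaze direction into the world frame."""
    d = head_rot.apply(np.asarray(gaze_dir_head, dtype=float))
    return head_pos_world, d / np.linalg.norm(d)

def waypoint_on_plane(origin, direction, plane_z=0.0):
    """Intersect the gaze ray with the horizontal plane z = plane_z."""
    if abs(direction[2]) < 1e-6:
        return None  # Ray parallel to the plane: no intersection.
    t = (plane_z - origin[2]) / direction[2]
    return origin + t * direction if t > 0 else None

# Example: head at 1.7 m height, gazing forward and slightly downward.
head = fuse_head_orientation(
    q_imu=[0.0, 0.0, 0.0, 1.0],  # identity quaternion (x, y, z, w)
    q_cam=R.from_euler("y", 5, degrees=True).as_quat(),
)
origin, direction = gaze_ray_world(head, [1.0, 0.0, -0.2],
                                   np.array([0.0, 0.0, 1.7]))
print(waypoint_on_plane(origin, direction))  # ~[8.5, 0, 0]
```

In a full system the resulting waypoint would be transformed into the quadrotor's frame using the estimated human-robot relative position before being sent as a navigation goal.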
| Original language | English (US) |
| --- | --- |
| Article number | 8626140 |
| Pages (from-to) | 1343-1350 |
| Number of pages | 8 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 4 |
| Issue number | 2 |
| DOIs | |
| State | Published - Apr 2019 |
ASJC Scopus subject areas
- Control and Systems Engineering
- Biomedical Engineering
- Human-Computer Interaction
- Mechanical Engineering
- Computer Vision and Pattern Recognition
- Computer Science Applications
- Control and Optimization
- Artificial Intelligence