TY - GEN
T1 - Realizing mixed-reality environments with tablets for intuitive human-robot collaboration for object manipulation tasks
AU - Frank, Jared A.
AU - Moorhead, Matthew
AU - Kapila, Vikram
N1 - Funding Information:
This work is supported in part by the National Science Foundation awards RET Site EEC-1132482, GK-12 Fellows DGE-0741714, and DRK-12 DRL-1417769, and NY Space Grant Consortium grant 76156-10488.
Publisher Copyright:
© 2016 IEEE.
PY - 2016/11/15
Y1 - 2016/11/15
AB - Although gesture-based input and augmented reality (AR) facilitate intuitive human-robot interaction (HRI), prior implementations have relied on research-grade hardware and software. This paper explores the use of tablets to render mixed-reality visual environments that support human-robot collaboration for object manipulation. A mobile interface is created on a tablet by integrating real-time vision, 3D graphics, touchscreen interaction, and wireless communication. This interface augments a live video of physical objects in a robot's workspace with corresponding virtual objects that a user can manipulate to intuitively command the robot to manipulate the physical objects. By generating the mixed-reality environment over an exocentric view provided by the tablet camera, the interface establishes a common frame of reference in which the user and the robot can effectively communicate spatial information for object manipulation. After addressing challenges due to limitations in mobile sensing and computation, the interface is evaluated in a study with participants to examine the performance of, and user experience with, the suggested approach.
UR - http://www.scopus.com/inward/record.url?scp=85002538652&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85002538652&partnerID=8YFLogxK
U2 - 10.1109/ROMAN.2016.7745146
DO - 10.1109/ROMAN.2016.7745146
M3 - Conference contribution
AN - SCOPUS:85002538652
T3 - 25th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2016
SP - 302
EP - 307
BT - 25th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 25th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2016
Y2 - 26 August 2016 through 31 August 2016
ER -