An assistive low-vision platform that augments spatial cognition through proprioceptive guidance: Point-to-Tell-and-Touch

Wenjun Gui, Bingyu Li, Shuaihang Yuan, John Ross Rizzo, Lakshay Sharma, Chen Feng, Anthony Tzes, Yi Fang

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

Abstract

Spatial cognition, as gained through the sense of vision, is one of the most important capabilities of human beings. However, for the visually impaired (VI), the lack of this perceptual capability poses great challenges in daily life. We have therefore designed Point-to-Tell-and-Touch, a wearable system with an ergonomic human-machine interface that assists the VI with active environmental exploration, with a particular focus on spatial intelligence and navigation to objects of interest in an unfamiliar environment. Our key idea is to link visual signals, as decoded synthetically, to the VI user's proprioception for more intelligible guidance, in addition to vision-to-audio assistance: the finger pose indicated by pointing serves as a 'proprioceptive laser pointer' to target an object in that line of sight. The whole system consists of two features, Point-to-Tell and Point-to-Touch, which can work independently or cooperatively. The Point-to-Tell feature comprises a camera with a novel one-stage neural network tailored for blind-centered object detection and recognition, and a headphone that tells the VI user the semantic label of, and distance to, the pointed object. The Point-to-Touch feature leverages a vibrating wrist band to create a haptic feedback tool that supplements the initial vectorial guidance provided by the first stage (hand pose giving the direction and the distance giving the extent, offered through audio cues). Both platform features utilize proprioception, or joint position sense: through hand pose, VI end users know where they are pointing relative to their egocentric coordinate system, and we use this foundation to build spatial intelligence. Our successful indoor experiments demonstrate that the proposed system is effective and reliable in helping the VI gain spatial cognition and explore the world in a more intuitive way.
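The abstract's vectorial guidance (hand pose as direction, distance as extent, delivered as audio plus haptic cues) can be illustrated with a minimal sketch. All function names, units, and thresholds below are illustrative assumptions, not taken from the paper:

```python
import math

def guidance_cue(point_dir, obj_vec, obj_label):
    """Hypothetical mapping from a pointing direction and a detected object's
    position (both in the camera frame) to the two cue channels the abstract
    describes: an audio message (semantic label + distance) and a haptic
    intensity for the wrist band."""
    # Distance to the object is the norm of its position vector (meters).
    dist = math.sqrt(sum(c * c for c in obj_vec))
    # Angular offset between the pointing ray and the object direction.
    dot = sum(p * o for p, o in zip(point_dir, obj_vec))
    norm_p = math.sqrt(sum(c * c for c in point_dir))
    cos_theta = max(-1.0, min(1.0, dot / (norm_p * dist)))
    angle = math.degrees(math.acos(cos_theta))
    # Audio cue: tell the user what is pointed at and how far away it is.
    audio = f"{obj_label}, {dist:.1f} meters"
    # Haptic cue: vibrate more strongly as the hand converges on the target;
    # the 45-degree falloff is an arbitrary illustrative choice.
    haptic = max(0.0, 1.0 - angle / 45.0)
    return audio, haptic
```

For example, pointing straight at a cup two meters ahead would yield the audio string "cup, 2.0 meters" and full haptic intensity; as the pointing ray drifts off target, the vibration weakens.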

Original language: English (US)
Title of host publication: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3817-3822
Number of pages: 6
ISBN (Electronic): 9781728140049
DOIs: 10.1109/IROS40897.2019.8967647
State: Published - Nov 2019
Event: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019 - Macau, China
Duration: Nov 3, 2019 - Nov 8, 2019

Publication series

Name: IEEE International Conference on Intelligent Robots and Systems
ISSN (Print): 2153-0858
ISSN (Electronic): 2153-0866

Conference

Conference: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019
Country: China
City: Macau
Period: 11/3/19 - 11/8/19

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Computer Vision and Pattern Recognition
  • Computer Science Applications


Cite this

Gui, W., Li, B., Yuan, S., Rizzo, J. R., Sharma, L., Feng, C., Tzes, A., & Fang, Y. (2019). An assistive low-vision platform that augments spatial cognition through proprioceptive guidance: Point-to-Tell-and-Touch. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019 (pp. 3817-3822). [8967647] (IEEE International Conference on Intelligent Robots and Systems). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IROS40897.2019.8967647