TY - GEN
T1 - An assistive low-vision platform that augments spatial cognition through proprioceptive guidance
T2 - 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019
AU - Gui, Wenjun
AU - Li, Bingyu
AU - Yuan, Shuaihang
AU - Rizzo, John Ross
AU - Sharma, Lakshay
AU - Feng, Chen
AU - Tzes, Anthony
AU - Fang, Yi
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - Spatial cognition, as gained through the sense of vision, is one of the most important capabilities of human beings. However, for the visually impaired (VI), the lack of this perceptual capability poses great challenges in daily life. We have therefore designed Point-to-Tell-and-Touch, a wearable system with an ergonomic human-machine interface for assisting the VI with active environmental exploration, with a particular focus on spatial intelligence and navigation to objects of interest in an unfamiliar environment. Our key idea is to link visual signals, as decoded synthetically, to the VI user's proprioception for more intelligible guidance, in addition to vision-to-audio assistance; i.e., finger pose, as indicated by pointing, is used as a 'proprioceptive laser pointer' to target an object in that line of sight. The system consists of two features, Point-to-Tell and Point-to-Touch, which can work independently or cooperatively. The Point-to-Tell feature combines a camera with a novel one-stage neural network tailored for blind-centered object detection and recognition, and a headphone that announces the semantic label of, and distance to, the pointed object. The Point-to-Touch feature leverages a vibrating wristband to provide haptic feedback that supplements the initial vectorial guidance provided by the first stage (hand pose giving the direction and the distance giving the extent, both offered through audio cues). Both features rely on proprioception, or joint position sense: through hand pose, the VI end user knows where they are pointing relative to their egocentric coordinate system, and we use this foundation to build spatial intelligence. Our indoor experiments demonstrate that the proposed system is effective and reliable in helping the VI gain spatial cognition and explore the world in a more intuitive way.
AB - Spatial cognition, as gained through the sense of vision, is one of the most important capabilities of human beings. However, for the visually impaired (VI), the lack of this perceptual capability poses great challenges in daily life. We have therefore designed Point-to-Tell-and-Touch, a wearable system with an ergonomic human-machine interface for assisting the VI with active environmental exploration, with a particular focus on spatial intelligence and navigation to objects of interest in an unfamiliar environment. Our key idea is to link visual signals, as decoded synthetically, to the VI user's proprioception for more intelligible guidance, in addition to vision-to-audio assistance; i.e., finger pose, as indicated by pointing, is used as a 'proprioceptive laser pointer' to target an object in that line of sight. The system consists of two features, Point-to-Tell and Point-to-Touch, which can work independently or cooperatively. The Point-to-Tell feature combines a camera with a novel one-stage neural network tailored for blind-centered object detection and recognition, and a headphone that announces the semantic label of, and distance to, the pointed object. The Point-to-Touch feature leverages a vibrating wristband to provide haptic feedback that supplements the initial vectorial guidance provided by the first stage (hand pose giving the direction and the distance giving the extent, both offered through audio cues). Both features rely on proprioception, or joint position sense: through hand pose, the VI end user knows where they are pointing relative to their egocentric coordinate system, and we use this foundation to build spatial intelligence. Our indoor experiments demonstrate that the proposed system is effective and reliable in helping the VI gain spatial cognition and explore the world in a more intuitive way.
UR - http://www.scopus.com/inward/record.url?scp=85081162504&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081162504&partnerID=8YFLogxK
U2 - 10.1109/IROS40897.2019.8967647
DO - 10.1109/IROS40897.2019.8967647
M3 - Conference contribution
AN - SCOPUS:85081162504
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 3817
EP - 3822
BT - 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 3 November 2019 through 8 November 2019
ER -