TY - GEN
T1 - Virtual Touch
T2 - 18th IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021
AU - Liu, Xixuan Julie
AU - Fang, Yi
N1 - Funding Information:
The authors gratefully acknowledge the financial support from the NYUAD Institute (Research Enhancement Fund - RE132).
Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Blind or Visually Impaired (BVI) individuals rely on haptics far more often than sighted people in their everyday lives to locate objects and acquire object details. This puts them at higher risk of contracting a virus through close contact during a pandemic crisis (e.g., COVID-19). Traditional canes give BVI individuals only a limited perceptive range. Our project develops a wearable solution named Virtual Touch that augments the BVI user's perceptive power so they can perceive objects near and far in their surrounding environment in a touch-free manner and consequently carry out activities of daily living during pandemics more intuitively, safely, and independently. The Virtual Touch system comprises a camera paired with TouchNet, a novel point-based neural network tailored for real-time blind-centered object detection, and a headphone that announces the detected semantic labels to the user. By pointing a finger, the BVI end user indicates where they are directing their attention relative to their egocentric coordinate system, and on this basis we build attention-driven spatial intelligence.
AB - Blind or Visually Impaired (BVI) individuals rely on haptics far more often than sighted people in their everyday lives to locate objects and acquire object details. This puts them at higher risk of contracting a virus through close contact during a pandemic crisis (e.g., COVID-19). Traditional canes give BVI individuals only a limited perceptive range. Our project develops a wearable solution named Virtual Touch that augments the BVI user's perceptive power so they can perceive objects near and far in their surrounding environment in a touch-free manner and consequently carry out activities of daily living during pandemics more intuitively, safely, and independently. The Virtual Touch system comprises a camera paired with TouchNet, a novel point-based neural network tailored for real-time blind-centered object detection, and a headphone that announces the detected semantic labels to the user. By pointing a finger, the BVI end user indicates where they are directing their attention relative to their egocentric coordinate system, and on this basis we build attention-driven spatial intelligence.
UR - http://www.scopus.com/inward/record.url?scp=85123058257&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85123058257&partnerID=8YFLogxK
U2 - 10.1109/ICCVW54120.2021.00196
DO - 10.1109/ICCVW54120.2021.00196
M3 - Conference contribution
AN - SCOPUS:85123058257
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 1708
EP - 1717
BT - Proceedings - 2021 IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 11 October 2021 through 17 October 2021
ER -