TY - JOUR
T1 - Understanding Human Hands in Contact at Internet Scale
AU - Shan, Dandan
AU - Geng, Jiaqi
AU - Shu, Michelle
AU - Fouhey, David F.
N1 - Funding Information:
Acknowledgments: This work was supported by: the Advanced Machine Learning Collaborative Grant from Procter & Gamble in collaboration with Matthew Barker, PhD; and a gift from Nokia Solutions and Networks Oy.
Publisher Copyright:
© 2020 IEEE.
PY - 2020
Y1 - 2020
N2 - Hands are the central means by which humans manipulate their world, and being able to reliably extract hand state information from Internet videos of humans engaged in interaction has the potential to pave the way to systems that can learn from petabytes of video data. This paper proposes steps towards this by inferring a rich representation of hands engaged in interaction that includes: hand location, side, contact state, and a box around the object in contact. To support this effort, we gather a large-scale dataset of hands in contact with objects consisting of 131 days of footage, as well as a 100K annotated hand-contact video frame dataset. A model trained on this dataset can serve as a foundation for hand-contact understanding in videos. We quantitatively evaluate it both on its own and in service of predicting and learning from 3D meshes of human hands.
AB - Hands are the central means by which humans manipulate their world, and being able to reliably extract hand state information from Internet videos of humans engaged in interaction has the potential to pave the way to systems that can learn from petabytes of video data. This paper proposes steps towards this by inferring a rich representation of hands engaged in interaction that includes: hand location, side, contact state, and a box around the object in contact. To support this effort, we gather a large-scale dataset of hands in contact with objects consisting of 131 days of footage, as well as a 100K annotated hand-contact video frame dataset. A model trained on this dataset can serve as a foundation for hand-contact understanding in videos. We quantitatively evaluate it both on its own and in service of predicting and learning from 3D meshes of human hands.
UR - http://www.scopus.com/inward/record.url?scp=85093081668&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85093081668&partnerID=8YFLogxK
U2 - 10.1109/CVPR42600.2020.00989
DO - 10.1109/CVPR42600.2020.00989
M3 - Conference article
AN - SCOPUS:85093081668
SN - 1063-6919
SP - 9866
EP - 9875
JO - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
JF - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
M1 - 9157473
T2 - 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020
Y2 - 14 June 2020 through 19 June 2020
ER -