TY - GEN
T1 - More than a feeling
T2 - 30th Annual ACM Symposium on User Interface Software and Technology, UIST 2017
AU - Butler, Crystal
AU - Michalowicz, Stephanie
AU - Subramanian, Lakshmi
AU - Burleson, Winslow
N1 - Publisher Copyright:
© 2017 Copyright is held by the owner/author(s).
PY - 2017/10/20
Y1 - 2017/10/20
N2 - Facial expressions transmit a variety of social, grammatical, and affective signals. For technology to leverage this rich source of communication, tools that better model the breadth of information these expressions convey are required. MiFace is a novel framework for creating expression lexicons that map signal values to parameterized facial muscle movements. Conventionally, such mappings are inferred by trained experts, and the set of generally accepted expressions established in this way is limited to six basic displays of affect. In contrast, our approach generatively simulates muscle movements on a 3D avatar. By applying natural language processing techniques to crowdsourced free-response labels for the resulting images, we efficiently converge on an expression's value across signal categories. Two studies returned 218 discriminable facial expressions with 51 unique labels. The six basic emotions are included, but we additionally define such nuanced expressions as embarrassed, curious, and hopeful.
AB - Facial expressions transmit a variety of social, grammatical, and affective signals. For technology to leverage this rich source of communication, tools that better model the breadth of information these expressions convey are required. MiFace is a novel framework for creating expression lexicons that map signal values to parameterized facial muscle movements. Conventionally, such mappings are inferred by trained experts, and the set of generally accepted expressions established in this way is limited to six basic displays of affect. In contrast, our approach generatively simulates muscle movements on a 3D avatar. By applying natural language processing techniques to crowdsourced free-response labels for the resulting images, we efficiently converge on an expression's value across signal categories. Two studies returned 218 discriminable facial expressions with 51 unique labels. The six basic emotions are included, but we additionally define such nuanced expressions as embarrassed, curious, and hopeful.
KW - 3D modeling
KW - Affective computing
KW - Avatars
KW - Facial expression recognition
KW - Natural language processing
KW - Social signal processing
KW - Virtual humans
UR - http://www.scopus.com/inward/record.url?scp=85041550861&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85041550861&partnerID=8YFLogxK
U2 - 10.1145/3126594.3126640
DO - 10.1145/3126594.3126640
M3 - Conference contribution
AN - SCOPUS:85041550861
T3 - UIST 2017 - Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology
SP - 773
EP - 786
BT - UIST 2017 - Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology
PB - Association for Computing Machinery, Inc
Y2 - 22 October 2017 through 25 October 2017
ER -