Facial expression mapping is the process of attributing signal values to particular sets of muscle activations in the face. This paper proposes the development of a broad lexicon of quantifiable, reproducible facial expressions with known signal values, using an expressive 3D model and crowdsourced labeling data. Traditionally, coding muscle movements in the face is a time-consuming manual process performed by trained specialists. Identifying the communicative content of an expression generally requires generating large sets of posed photographs, with identifying labels chosen from a circumscribed list. Consequently, the widely accepted collection of configurations with known meanings is limited to six basic expressions of emotion. Our approach defines mappings from parameterized facial expressions displayed by a 3D avatar to their semantic representations. By collecting large free-response label sets from naïve raters and applying natural language processing techniques, we converge quickly and with low overhead on a semantic centroid, a single label that summarizes the raters' consensus.
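The centroid-finding step described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the label list, the three-dimensional "embeddings," and the distance measure are all hypothetical stand-ins (a real system would use trained word vectors such as word2vec or GloVe and a much larger vocabulary). The idea is simply to average the vectors of all raters' labels and return the vocabulary word nearest that average.

```python
# Hypothetical sketch: pick a consensus label as the word nearest the
# mean ("semantic centroid") of all raters' label vectors.

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sq_dist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Hypothetical free-response labels from naive raters for one posed expression.
labels = ["happy", "joyful", "happy", "cheerful", "happy", "joyful"]

# Toy 3-d vectors standing in for real word embeddings; values are
# illustrative only, not trained.
embeddings = {
    "happy":    [0.80, 0.20, 0.10],
    "joyful":   [0.90, 0.10, 0.00],
    "cheerful": [0.70, 0.30, 0.20],
}

# Semantic centroid: the mean of the raters' label vectors.
c = centroid([embeddings[word] for word in labels])

# Consensus label: the vocabulary word closest to the centroid.
consensus = min(embeddings, key=lambda w: sq_dist(embeddings[w], c))
print(consensus)  # → happy
```

With these toy values the majority label "happy" also lies nearest the centroid, but in general the two can differ: the centroid method rewards a label that is semantically close to *all* responses, which is the point of using embeddings rather than a simple vote over raw strings.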