Abstract
The integration of text-to-speech (TTS) synthesis and the animation of synthetic faces enables new applications such as visual human-computer interfaces using agents or avatars. The TTS informs the talking head when phonemes are spoken, and the appropriate mouth shapes are animated and rendered while the TTS produces the sound. We call this integrated system of TTS and animation a Visual TTS (VTTS). This paper describes the architecture of an integrated VTTS synthesizer that allows facial expressions to be defined as bookmarks in the text; these expressions are animated while the model is talking. The position of a bookmark in the text defines the start time of the facial expression. The bookmark itself names the expression, its amplitude, and the duration within which the face has to reach that amplitude. A bookmark-to-face-animation-parameter (FAP) converter creates a curve defining the amplitude of the given FAP over time, using third-order Hermite functions.
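The following is a minimal sketch of how such a bookmark-to-FAP conversion might look, assuming a bookmark carries a target amplitude and a duration, and that the curve ramps from the FAP's current amplitude to the target using the standard cubic Hermite basis. The function names, the zero end tangents, and the 25 fps frame rate are illustrative assumptions, not taken from the paper.

```python
def hermite(t, p0, p1, m0=0.0, m1=0.0):
    """Third-order (cubic) Hermite interpolation between p0 and p1 for t in [0, 1]."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1


def bookmark_to_fap_curve(start_amplitude, target_amplitude, duration_s, fps=25):
    """Sample an amplitude-over-time curve for one FAP.

    The bookmark's position in the text gives the start time; this helper only
    generates the relative curve: the amplitude ramps from the current value to
    the bookmark's target over duration_s. Zero tangents make the motion ease
    in and out rather than change abruptly.
    """
    n_frames = max(1, round(duration_s * fps))
    return [hermite(i / n_frames, start_amplitude, target_amplitude)
            for i in range(n_frames + 1)]


# Example: raise a hypothetical "joy" expression FAP from 0.0 to 0.8 over 0.4 s.
curve = bookmark_to_fap_curve(0.0, 0.8, 0.4)
```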
Original language | English (US)
---|---
State | Published - 1998
Event | 5th International Conference on Spoken Language Processing, ICSLP 1998 - Sydney, Australia
Duration | Nov 30 1998 → Dec 4 1998
Conference
Conference | 5th International Conference on Spoken Language Processing, ICSLP 1998
---|---
Country/Territory | Australia
City | Sydney
Period | 11/30/98 → 12/4/98
ASJC Scopus subject areas
- Language and Linguistics
- Linguistics and Language