The work presented in this thesis addresses the problem of generating audio-visual expressive performances for virtual actors. A virtual actor is represented by a 3D talking head, and an audio-visual performance comprises facial expressions, head movements, gaze direction, and the speech signal. While a substantial body of work has been devoted to emotions, we explore here expressive verbal behaviors that signal mental states, i.e., "how speakers feel about what they say". We study the characteristics of these so-called dramatic attitudes and the way they are encoded by speaker-specific prosodic signatures, i.e., mental-state-specific patterns of trajectories of audio-visual prosodic parameters.
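To make the notion of a prosodic signature concrete, the sketch below shows one possible way to represent per-utterance trajectories of audio-visual prosodic parameters and to aggregate them per (speaker, attitude) pair. This is a minimal illustrative assumption, not the representation or model actually used in this thesis; all identifiers are hypothetical.

```python
# Illustrative sketch only (hypothetical names, not the thesis's model):
# an utterance stores per-frame trajectories of audio-visual prosodic
# parameters; a "signature" is crudely approximated here as the mean
# time-normalized F0 trajectory over utterances sharing one attitude.
from dataclasses import dataclass
import numpy as np

@dataclass
class Utterance:
    speaker: str
    attitude: str               # dramatic attitude label, e.g. "ironic"
    f0: np.ndarray              # fundamental frequency per frame (Hz)
    head_rotation: np.ndarray   # (n_frames, 3) pitch/yaw/roll angles
    gaze: np.ndarray            # (n_frames, 2) horizontal/vertical angles

def prosodic_signature(utterances, n_points=50):
    """Average time-normalized F0 trajectory over utterances sharing
    one (speaker, attitude) pair -- a crude stand-in for a signature."""
    resampled = []
    for u in utterances:
        # Resample each trajectory to a common length before averaging.
        t = np.linspace(0, len(u.f0) - 1, n_points)
        resampled.append(np.interp(t, np.arange(len(u.f0)), u.f0))
    return np.mean(resampled, axis=0)
```

In practice such signatures would span all modalities (voice, face, head, gaze) rather than F0 alone, and would be learned rather than averaged; the sketch only illustrates the idea of a mental-state-specific pattern over parameter trajectories.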