Facial Expression Cloning
We are developing techniques for cloning facial expressions displayed by one person onto face models representing other faces. Our interest in this problem is two-fold. First, in terms of visual speech synthesis, animating the face of a new person generally requires a training corpus specific to that person; a more efficient solution would be to reuse existing data. Second, in terms of studying behaviour, we may wish to dissociate behaviour (facial expressions) from appearance (identity). A system capable of doing this in real time could then be used to manipulate faces during live face-to-face interactions, so that the influence of apparent appearance versus social expectation can be studied.