Text-Driven Mouth Animation for Human-Computer Interaction with Personal Assistant.

Research
Dates
  • Creation: 01/30/2020
  • Update: 07/30/2020
Clement Duhart, Yliess Hati (PhD)
Personal assistants such as Google Assistant, Alexa, and Bixby are becoming more pervasive in our environments, yet they still do not provide natural interactions. Their limited expressiveness and lack of visual feedback can make the experience frustrating.
   
ADMA can give any personal assistant a face and bring it to life. Given a sentence, it generates the corresponding speech and the facial animation that accompanies it. The premise of this project was published at ICAD 2019 in Northumbria.
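To give an idea of what such a text-to-animation pipeline involves, here is a minimal, hypothetical sketch: text is converted to a phoneme sequence, each phoneme is mapped to a viseme (a target mouth shape), and the visemes are laid out on a timeline as animation keyframes. The `PHONEME_TO_VISEME` table, the `Keyframe` structure, and the fixed per-phoneme duration are illustrative assumptions, not ADMA's actual method, which generates speech and animation from the sentence directly.

```python
from dataclasses import dataclass

# Illustrative phoneme-to-viseme lookup table (an assumption for this
# sketch; a real system like ADMA would learn this mapping instead).
PHONEME_TO_VISEME = {
    "AA": "open", "IY": "wide", "UW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth", "V": "teeth", "S": "wide",
}

@dataclass
class Keyframe:
    time: float   # seconds from the start of the utterance
    viseme: str   # target mouth shape at that instant

def animate(phonemes, duration_per_phoneme=0.08):
    """Turn a phoneme sequence into timed viseme keyframes.

    Uses a fixed duration per phoneme for simplicity; a real system
    would take timing from the synthesized speech.
    """
    frames = []
    for i, ph in enumerate(phonemes):
        viseme = PHONEME_TO_VISEME.get(ph, "neutral")
        frames.append(Keyframe(time=i * duration_per_phoneme, viseme=viseme))
    return frames

# Example: the phonemes of "mama" alternate closed and open mouth shapes.
frames = animate(["M", "AA", "M", "AA"])
print([(round(f.time, 2), f.viseme) for f in frames])
```

A rendering front end would then interpolate between these keyframes to drive the mouth of the on-screen character in sync with the generated audio.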
   
ADMA is not limited to the personal assistant ecosystem. It can also be used for creative purposes such as animating virtual characters, creating deepfakes, previsualizing scene acting, and much more.