The SignCom project aims to improve the quality of real-time interaction between humans and virtual agents by exploiting natural communication modalities such as gestures, facial expressions, and gaze direction. Using structured and coded French Sign Language (LSF) signs, real and virtual humans can converse with each other. The results of this research will be valuable for creating "intelligent and expressive" interfaces for people who use signed languages.


Two main applications are considered for the project:

  1. Interactive kiosk: making public announcements accessible to deaf and hard-of-hearing people is recognized as a national and international priority. With the interactive kiosk, users' gestures are captured by cameras and recognized by the system; responses are then delivered by an expressive virtual character that gives information and advice. In this case, the dialog is guided by constrained scenarios.
  2. Virtual reality: LSF signs, previously recorded with motion capture (mocap), are used to drive a virtual character's animation. Interaction is guided by the progressive construction of a 3D virtual space shared by the human user and the humanoid character.
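The kiosk application above pairs sign recognition with scenario-guided dialog: each recognized sign selects a response for the virtual character within a restricted script. A minimal sketch of such a scenario-guided loop is shown below; all names (`KioskDialog`, the sign glosses, the scenario table) are illustrative assumptions, not part of the SignCom system.

```python
# Hypothetical sketch of a scenario-guided kiosk dialog loop.
# Recognized signs are represented as gloss strings; the avatar's
# response is a sequence of glosses to be rendered as animation.

# Scenario table: state -> {recognized sign: (response glosses, next state)}
SCENARIO = {
    "start": {
        "HELLO": (["HELLO", "HOW-CAN-I-HELP"], "await_request"),
    },
    "await_request": {
        "SCHEDULE": (["NEXT-TRAIN", "PLATFORM-2"], "start"),
        "THANKS": (["YOU-ARE-WELCOME"], "start"),
    },
}

class KioskDialog:
    def __init__(self, scenario):
        self.scenario = scenario
        self.state = "start"

    def handle(self, sign):
        """Map one recognized sign to an avatar response within the scenario."""
        transitions = self.scenario[self.state]
        if sign not in transitions:
            # Out-of-scenario input: ask the user to repeat, keep the state.
            return ["PLEASE-REPEAT"]
        response, self.state = transitions[sign]
        return response

dialog = KioskDialog(SCENARIO)
print(dialog.handle("HELLO"))     # greeting moves the dialog forward
print(dialog.handle("SCHEDULE"))  # the avatar gives information
```

Restricting the dialog to a finite scenario table like this keeps recognition tractable: only the signs expected in the current state need to be distinguished.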

Participating labs:

Support from:
ANR (Agence Nationale de la Recherche)
Images et Réseaux