Radu Horaud presents Exploiting the Complementarity Between Vision and Audio for the Dynamic Analysis of People

On 2018-03-27 16:00:00 at G205, Karlovo náměstí 13, Praha 2
In human-robot interaction, one major challenge is the design of methods
that enable a group of people to communicate with a robot. Unlike in dyadic
interaction, the robot faces the problem of associating temporal
segments of speech with individual participants, all in the presence of visual
clutter, overlapping speech, and reverberation. For example, the robot
should be able to recognize who the speaker is and who the listeners are,
and to join the conversation at the right moment. We address these
problems by fusing audio information, visual information, and robot control.
I will give an overview of recent research carried out by the Perception
group at INRIA Grenoble.
Responsible for content: Petr Pošík