Amanda Duarte presents "Towards a better visual understanding of sign languages"

On 2022-10-20 11:00:00 at G205, Karlovo náměstí 13, Praha 2
Signed languages (SL) are complete and natural languages used as the first or
preferred mode of communication by millions of people worldwide.
Although they are rich and complex languages, they unfortunately continue to
be marginalized by society and are therefore still not fully accounted for in
the design of new technologies.
Recent advances in Artificial Intelligence and Machine Learning (ML) have
the power to enable better accessibility for sign language users and to narrow
the existing communication barrier between the Deaf community and non-sign
language users.
However, the inclusion of these languages in recent ML models is still very
limited and in its early stages.
In this talk, we will discuss two of the challenges that have hindered
progress in visual sign language understanding: the lack of appropriate
training data and the difficulty of developing ML methods that account for the
complexities of sign language.
We will introduce the How2Sign dataset (CVPR 2021), a large-scale collection of
SL videos together with a set of annotations, as well as the novel task of sign
language video retrieval with free-form textual queries (CVPR 2022).
I will also present SPOT-ALIGN, an automatic sign language annotation framework
that incorporates iterative rounds of sign spotting and feature alignment to
expand the scope and scale of available training data, and show that with these
annotations we are able to learn a robust sign video embedding and improve the
performance of both sign recognition and the novel sign language video
retrieval task.

Short Bio:
Amanda Duarte is a researcher in the Earth Sciences department of the Barcelona
Supercomputing Center (BSC), where she recently started working on applying
Artificial Intelligence and Machine Learning to Earth Sciences problems such as
climate variability and change, and drought prediction. She received her PhD in
Computer Science from Universitat Politècnica de Catalunya (UPC) with a focus
on computer vision and machine learning models for sign language processing.
She is also the creator of the How2Sign dataset. Her past research projects
span a wide variety of areas and involve multimodal data collection and
annotation, sign language processing, speech-conditioned image generation,
underwater robot localization and navigation, and underwater image restoration.

webpage: http://amandaduarte.com.br/
Responsible person: Petr Pošík