Noa Garcia presents Societal biases in vision and language
On 2022-08-02 at 11:00 in G205, Karlovo náměstí 13, Praha 2
The presence of undesirable biases in computer vision applications is of increasing concern. Evidence shows that large-scale datasets, and the models trained on them, present major imbalances in how different subgroups of the population are represented. Detecting and addressing these biases, also known as societal biases, has become an active research direction in our community. In this talk, I will analyze gender and racial bias in two fundamental vision and language tasks: image captioning and visual question answering. The multimodality of vision and language models means that bias can be encoded in both images and text, adding an extra layer of difficulty in identifying the cause of the problem. Thus, from the datasets to the final prediction, analyzing and quantifying societal bias at every step is crucial to proposing solutions for fairer representations.
Bio: Noa Garcia is an Assistant Professor at the Institute for Datability Science (IDS) at Osaka University. Previously, she was a postdoctoral researcher, also at IDS, after completing her Ph.D. at Aston University. Her research interests lie at the intersection of computer vision, natural language processing, and art analysis.
Noa is visiting VRG and will stay for the whole week. She will be seated at G10.