Chaim Baskin presents "Learning from Limited and Imperfect Data"

On 2023-02-07 at 11:00 in G205, Karlovo náměstí 13, Praha 2
The development of massive databases has played a crucial role in the deep
neural network (DNN) revolution in classification tasks over the past decade.

Since the amount of annotated data seen during training strongly affects the
performance of the resulting classifier, a prominent research direction in
recent years has been to reduce the amount of labeled data required.

Possible directions include semi-supervised learning, self-supervised learning,
and transfer learning. Despite their potential, these methods have significant
limitations. An alternative approach is to learn from massive datasets that
contain samples with corrupted labels, a setting known as learning from noisy
labels (LNL).
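
To make the LNL setting concrete, here is a minimal sketch of the standard
benchmark protocol of injecting symmetric label noise into a clean training
set; the noise rate and label counts below are illustrative assumptions, not
the speaker's setup.

    import numpy as np

    def inject_symmetric_noise(labels, num_classes, noise_rate, seed=0):
        # Flip a fraction `noise_rate` of labels uniformly to a *different*
        # class, the usual "symmetric noise" benchmark in the LNL literature.
        rng = np.random.default_rng(seed)
        labels = labels.copy()
        flip = rng.random(len(labels)) < noise_rate
        offsets = rng.integers(1, num_classes, size=len(labels))
        labels[flip] = (labels[flip] + offsets[flip]) % num_classes
        return labels

    clean = np.random.randint(0, 10, size=50000)   # 10-class toy labels
    noisy = inject_symmetric_noise(clean, num_classes=10, noise_rate=0.4)
    print((noisy != clean).mean())                 # ~0.4 of labels corrupted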

One common approach for LNL exploits a clean auxiliary dataset to meta-learn
how to correct the corrupted labels or to perform sample reweighting.
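
A representative instance of this idea is meta-learned sample reweighting in
the spirit of Ren et al.'s "Learning to Reweight Examples" (2018); it is not
the framework presented in the talk. The sketch below assigns a weight to each
noisy example by differentiating the loss on a small clean batch through one
virtual SGD step; the model, learning rate, and batch contents are assumed
placeholders.

    import torch
    import torch.nn.functional as F
    from torch.func import functional_call

    def example_weights(model, x_noisy, y_noisy, x_clean, y_clean, lr=0.1):
        # Per-example perturbations; zero-initialized so only their
        # meta-gradient matters.
        params = dict(model.named_parameters())
        eps = torch.zeros(len(x_noisy), requires_grad=True)

        # Weighted loss on the noisy batch under the current parameters.
        losses = F.cross_entropy(functional_call(model, params, (x_noisy,)),
                                 y_noisy, reduction="none")
        weighted = (eps * losses).sum()

        # Virtual SGD step theta' = theta - lr * grad, kept differentiable
        # with respect to eps via create_graph=True.
        grads = torch.autograd.grad(weighted, list(params.values()),
                                    create_graph=True)
        virtual = {k: p - lr * g for (k, p), g in zip(params.items(), grads)}

        # Clean-batch loss at the virtual parameters; its gradient w.r.t. eps
        # measures how much up-weighting each noisy example helps on clean
        # data.
        clean_loss = F.cross_entropy(
            functional_call(model, virtual, (x_clean,)), y_clean)
        grad_eps = torch.autograd.grad(clean_loss, eps)[0]

        # Keep only helpful examples and normalize.
        w = torch.clamp(-grad_eps, min=0.0)
        return w / w.sum().clamp(min=1e-8)

The returned weights then multiply the per-example losses in the actual
training step, so clearly mislabeled samples contribute little or nothing.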

In this work, we exploit ideas from noise separation and present a
meta-learning two-students one-teacher framework for the LNL problem.
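
The abstract does not spell out the architecture, so the following PyTorch
skeleton is only a guess at the moving parts it implies: two student
classifiers and a teacher that maps a student's prediction plus the noisy
label to a corrected soft label. Every shape and the cross-training rule here
are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Teacher(nn.Module):
        # Label correction: student logits + noisy one-hot label -> soft label.
        def __init__(self, num_classes, hidden=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2 * num_classes, hidden),
                                     nn.ReLU(),
                                     nn.Linear(hidden, num_classes))
        def forward(self, student_logits, noisy_onehot):
            z = torch.cat([student_logits, noisy_onehot], dim=-1)
            return self.net(z).softmax(dim=-1)

    num_classes = 10
    student_a = nn.Sequential(nn.Flatten(), nn.Linear(784, num_classes))
    student_b = nn.Sequential(nn.Flatten(), nn.Linear(784, num_classes))
    teacher = Teacher(num_classes)

    x = torch.randn(32, 1, 28, 28)                  # dummy input batch
    y_noisy = torch.randint(0, num_classes, (32,))
    onehot = F.one_hot(y_noisy, num_classes).float()

    # One student's (detached) view feeds the teacher, whose corrected soft
    # labels supervise the other student -- no clean auxiliary data involved.
    corrected = teacher(student_a(x).detach(), onehot)
    loss_b = F.cross_entropy(student_b(x), corrected)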

In contrast to prior works on meta-learning for LNL, our framework does not
require a clean auxiliary dataset. We design a unique label-correction
architecture for the teacher in our framework. Lastly, we formulate the
learning process as a bi-level optimization problem and derive multiple
efficient meta-learning schemes for solving it.
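
Written generically, such a bi-level problem takes the form below; the exact
inner loss, outer objective, and symbols are placeholders, since the abstract
does not state them. Here \theta are the students' weights, \phi the
teacher's, g_\phi(x_i, \tilde{y}_i) the teacher's corrected label for the
noisy pair (x_i, \tilde{y}_i), and \ell a per-example loss:

    \min_{\phi} \ \mathcal{L}_{\mathrm{outer}}\!\left(\theta^{*}(\phi)\right)
    \quad \text{s.t.} \quad
    \theta^{*}(\phi) \in \arg\min_{\theta}
        \sum_{i} \ell\!\left(f_{\theta}(x_i),\, g_{\phi}(x_i, \tilde{y}_i)\right)

Efficient meta-learning schemes avoid solving the inner problem to
convergence, for example by differentiating through a single inner SGD step as
in the reweighting sketch above.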
Responsible person: Petr Pošík