Teymur Azayev presents Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

On 2019-01-17 10:00:00 at G205, Karlovo náměstí 13, Praha 2
Reading group on the work by C. Finn, P. Abbeel, and S. Levine, published at ICML
2017. Presented by Teymur Azayev

Paper abstract: We propose an algorithm for meta-learning that is
model-agnostic, in the sense that it is compatible with any model trained with
gradient descent and applicable to a variety of different learning problems,
including classification, regression, and reinforcement learning. The goal of
meta-learning is to train a model on a variety of learning tasks, such that it
can solve new learning tasks using only a small number of training samples. In
our approach, the parameters of the model are explicitly trained such that a
small number of gradient steps with a small amount of training data from a new
task will produce good generalization performance on that task. In effect, our
method trains the model to be easy to fine-tune. We demonstrate that this
approach leads to state-of-the-art performance on two few-shot image
classification benchmarks, produces good results on few-shot regression, and
accelerates fine-tuning for policy gradient reinforcement learning with neural
network policies.
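The meta-training loop described in the abstract can be sketched on a toy task family. This is a hypothetical first-order illustration (not the authors' code): each task is a line y = a*x, the model is y = w*x, and the initialization w is meta-learned so that a single inner gradient step adapts well to a new task.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, x, a):
    """Squared error of y = w*x against targets y = a*x, and its gradient in w."""
    err = w * x - a * x
    return np.mean(err ** 2), 2.0 * np.mean(err * x)

w = 0.0                  # meta-learned initialization
alpha, beta = 0.5, 0.05  # inner (adaptation) and outer (meta) step sizes

for _ in range(1000):
    a = rng.uniform(-2.0, 2.0)           # sample a task: the line's slope
    x_sup = rng.uniform(-1.0, 1.0, 10)   # small support set for adaptation
    x_qry = rng.uniform(-1.0, 1.0, 10)   # query set for the meta-update
    _, g_sup = loss_and_grad(w, x_sup, a)
    w_task = w - alpha * g_sup           # one inner gradient step on the task
    _, g_qry = loss_and_grad(w_task, x_qry, a)
    w -= beta * g_qry                    # first-order meta-update (drops d w_task / d w)
```

Full MAML also differentiates through the inner update, a second-order term that the first-order variant above omits; the paper reports the first-order approximation performs nearly as well at lower cost.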

Paper URL: https://arxiv.org/pdf/1703.03400.pdf

Reading group participants: please send questions about unclear parts to the
speaker at least one day in advance.

See the reading groups page:
http://cmp.felk.cvut.cz/~toliageo/rg/index.html
Responsible for content: Petr Pošík