Aurele HAINAUT presents Decision Transformer: Reinforcement Learning via Sequence Modeling

On 2022-01-06 at 11:00 via https://feectu.zoom.us/j/98555944426
"Online Decision Transformer: Reinforcement Learning via Sequence Modeling",
Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha
Laskin,
Pieter Abbeel, Aravind Srinivas, Igor Mordatch, NeurIPS 2021

Paper url:
https://proceedings.neurips.cc/paper/2021/hash/7f489f642a0ddb10272b5c31057f0663-Abstract.html

Paper abstract: We introduce a framework that abstracts Reinforcement Learning
(RL) as a sequence modeling problem. This allows us to draw upon the simplicity
and scalability of the Transformer architecture, and associated advances in
language modeling such as GPT-x and BERT. In particular, we present Decision
Transformer, an architecture that casts the problem of RL as conditional
sequence modeling. Unlike prior approaches to RL that fit value functions or
compute policy gradients, Decision Transformer simply outputs the optimal
actions by leveraging a causally masked Transformer. By conditioning an
autoregressive model on the desired return (reward), past states, and actions,
our Decision Transformer model can generate future actions that achieve the
desired return. Despite its simplicity, Decision Transformer matches or exceeds
the performance of state-of-the-art model-free offline RL baselines on Atari,
OpenAI Gym, and Key-to-Door tasks.
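To make the return-conditioning idea in the abstract concrete, below is a minimal
sketch (in PyTorch) of the kind of model the paper describes: returns-to-go,
states, and actions are embedded as an interleaved token sequence and fed through
a causally masked Transformer, which predicts the next action from each state
token. Layer sizes, hyperparameters, and the toy rollout are illustrative
assumptions, not the authors' actual settings or code.

    # Hedged sketch of return-conditioned sequence modeling; not the official code.
    import torch
    import torch.nn as nn

    class DecisionTransformerSketch(nn.Module):
        def __init__(self, state_dim, act_dim, embed_dim=64, n_layers=2,
                     n_heads=4, max_len=20):
            super().__init__()
            # Separate embeddings for returns-to-go, states, actions, timesteps.
            self.embed_rtg = nn.Linear(1, embed_dim)
            self.embed_state = nn.Linear(state_dim, embed_dim)
            self.embed_action = nn.Linear(act_dim, embed_dim)
            self.embed_timestep = nn.Embedding(max_len, embed_dim)
            layer = nn.TransformerEncoderLayer(embed_dim, n_heads,
                                               dim_feedforward=4 * embed_dim,
                                               batch_first=True)
            self.transformer = nn.TransformerEncoder(layer, n_layers)
            self.predict_action = nn.Linear(embed_dim, act_dim)

        def forward(self, rtg, states, actions, timesteps):
            # rtg: (B, T, 1), states: (B, T, state_dim),
            # actions: (B, T, act_dim), timesteps: (B, T)
            B, T = states.shape[0], states.shape[1]
            t_emb = self.embed_timestep(timesteps)
            # Interleave tokens as (R_t, s_t, a_t) for each timestep t.
            tokens = torch.stack([
                self.embed_rtg(rtg) + t_emb,
                self.embed_state(states) + t_emb,
                self.embed_action(actions) + t_emb,
            ], dim=2).reshape(B, 3 * T, -1)
            causal = nn.Transformer.generate_square_subsequent_mask(3 * T)
            h = self.transformer(tokens, mask=causal)
            # Predict a_t from the hidden state at the s_t token (index 3t + 1).
            return self.predict_action(h[:, 1::3])

    # Toy usage: condition on a desired return and a short history.
    model = DecisionTransformerSketch(state_dim=4, act_dim=2)
    B, T = 1, 5
    rtg = torch.full((B, T, 1), 10.0)        # desired return-to-go
    states = torch.randn(B, T, 4)
    actions = torch.zeros(B, T, 2)           # past actions (last one unknown)
    timesteps = torch.arange(T).unsqueeze(0)
    pred_actions = model(rtg, states, actions, timesteps)
    print(pred_actions.shape)                # (1, 5, 2); last entry is the next action

At evaluation time the desired return acts as a knob: conditioning on a high
return-to-go prompts the model to generate action sequences consistent with
achieving that return.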

See the reading group page:
http://cmp.felk.cvut.cz/~toliageo/rg/index.html
Responsible person: Petr Pošík