Ondrej Kobza presents BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

On 2022-01-25 11:00 at https://feectu.zoom.us/j/98555944426
"BERT: Pre-training of Deep Bidirectional Transformers for Language
Understanding" , NAACL-HLT 2019
https://arxiv.org/pdf/1810.04805.pdf

Paper abstract:
We introduce a new language representation model called BERT, which stands for
Bidirectional Encoder Representations from Transformers. Unlike recent language
representation models, BERT is designed to pre-train deep bidirectional
representations from unlabeled text by jointly conditioning on both left and
right context in all layers. As a result, the pre-trained BERT model can be
fine-tuned with just one additional output layer to create state-of-the-art
models for a wide range of tasks, such as question answering and language
inference, without substantial task-specific architecture modifications.
BERT is conceptually simple and empirically powerful. It obtains new
state-of-the-art results on eleven natural language processing tasks, including
pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI
accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering
Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1
(5.1 point absolute improvement).
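
As a concrete illustration of the abstract's claim that the pre-trained model can be fine-tuned with just one additional output layer, below is a minimal sketch of that setup. It is not taken from the paper or the talk: it assumes PyTorch and the Hugging Face transformers library, and the checkpoint name "bert-base-uncased" and the two-label task are illustrative choices.

import torch
from torch import nn
from transformers import BertModel, BertTokenizer

# Pre-trained bidirectional encoder (illustrative checkpoint name).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

# The "one additional output layer": a linear classifier on the [CLS] representation.
classifier = nn.Linear(encoder.config.hidden_size, 2)

inputs = tokenizer("BERT conditions on both left and right context.",
                   return_tensors="pt")
outputs = encoder(**inputs)
cls_repr = outputs.last_hidden_state[:, 0]  # representation of the [CLS] token
logits = classifier(cls_repr)               # task-specific scores

# During fine-tuning, a loss such as cross-entropy on these logits is
# backpropagated through both the classifier and the pre-trained encoder.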

See the reading group page:
http://cmp.felk.cvut.cz/~toliageo/rg/index.html
Responsible person: Petr Pošík