BART is one of the first sequence-to-sequence (Seq2Seq) models in the library, and it achieves state-of-the-art results on text generation tasks such as abstractive summarization. Three sets of pretrained weights are released:
- `bart-large`: the pretrained base model
- `bart-large-cnn`: the base model finetuned on the CNN/Daily Mail abstractive summarization task
- `bart-large-mnli`: the base model finetuned on the MNLI classification task
Big thanks to the original authors, especially Mike Lewis, Yinhan Liu, and Naman Goyal, who helped answer our questions!