erdiari committed
Commit 1b9b3c4
1 Parent(s): f0f86db

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -40,8 +40,8 @@ VBART is the first sequence-to-sequence LLM pre-trained on Turkish corpora from
 The model is capable of conditional text generation tasks such as text summarization, paraphrasing, and title generation when fine-tuned.
 It outperforms its multilingual counterparts, albeit being much smaller than other implementations.
 
-VBART-XLarge is an experimental version of original VBART-Large. VBart-XLarge created using copying original weights of VBart-Large and adding extra layers between them and trained with %88 less data.
-VBART-XLarge gives improved results compared to VBART-Large albeit being small.
+VBART-XLarge is an experimental version of the original VBART-Large. It was created by copying the original weights of VBART-Large and inserting extra layers between them, and it was exposed to 88% fewer tokens than VBART-Large.
+VBART-XLarge gives improved results compared to VBART-Large, albeit by small margins.
 
 This repository contains fine-tuned TensorFlow and Safetensors weights of VBART for question-answering and generation tasks described in the [paper](https://doi.org/10.55730/1300-0632.3914).
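The added lines describe building VBART-XLarge by copying VBART-Large's trained weights and inserting extra layers between them. A minimal sketch of that depth-expansion idea, using toy dicts in place of real transformer blocks (the names `expand_depth` and `make_new_layer` are hypothetical, and the initialization of the inserted layers is an assumption, not stated in the commit):

```python
import copy

def expand_depth(layers, make_new_layer):
    """Copy each trained layer and insert a freshly built layer after it,
    doubling the depth of the stack while reusing the trained weights."""
    expanded = []
    for layer in layers:
        expanded.append(copy.deepcopy(layer))  # reuse trained weights
        expanded.append(make_new_layer())      # new layer inserted in between
    return expanded

# Toy "layers": dicts standing in for real transformer blocks.
large = [{"id": i, "w": [0.1 * i, 0.2 * i]} for i in range(6)]
xlarge = expand_depth(large, make_new_layer=lambda: {"id": "new", "w": [0.0, 0.0]})

print(len(large), len(xlarge))  # 6 12
```

The expanded stack would then be trained further; per the diff, the XLarge model saw 88% fewer tokens than VBART-Large's original pre-training run.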