Update README.md
README.md
CHANGED
@@ -40,8 +40,8 @@ VBART is the first sequence-to-sequence LLM pre-trained on Turkish corpora from
 The model is capable of conditional text generation tasks such as text summarization, paraphrasing, and title generation when fine-tuned.
 It outperforms its multilingual counterparts, albeit being much smaller than other implementations.
 
-VBART-XLarge is an experimental version of original VBART-Large. VBart-XLarge created using copying original weights of VBart-Large and adding extra layers between
-VBART-XLarge gives improved results compared to VBART-Large albeit
+VBART-XLarge is an experimental version of the original VBART-Large. It was created by copying the original weights of VBART-Large and adding extra layers between them. It was exposed to 88% fewer tokens compared to VBART-Large.
+VBART-XLarge gives improved results compared to VBART-Large, albeit by small margins.
 
 This repository contains fine-tuned TensorFlow and Safetensors weights of VBART for question-answering and generation tasks described in the [paper](https://doi.org/10.55730/1300-0632.3914).
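The added line describes building VBART-XLarge by copying VBART-Large's weights and interleaving freshly initialized layers between them (a depth up-scaling scheme). A minimal sketch of that idea, using toy flat weight vectors rather than VBART's actual layer structure — the function name, layer sizes, and initialization scale are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of depth up-scaling: deepen a model by copying its
# pretrained layers and inserting newly initialized layers in the gaps.
# Layers here are toy flat weight vectors, not VBART's real modules.
import copy
import random


def upscale_layers(pretrained_layers, extra_per_gap):
    """Interleave `extra_per_gap` freshly initialized layers between each
    pair of consecutive pretrained layers (weights copied, not shared)."""
    deeper = []
    for i, layer in enumerate(pretrained_layers):
        deeper.append(copy.deepcopy(layer))  # keep the pretrained weights
        if i < len(pretrained_layers) - 1:
            for _ in range(extra_per_gap):
                # New layer of the same shape, small random init;
                # it would then be trained further on fresh tokens.
                deeper.append([random.gauss(0, 0.02) for _ in layer])
    return deeper


# Toy "model": 3 pretrained layers, each a flat weight vector.
pretrained = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
deeper = upscale_layers(pretrained, extra_per_gap=1)
print(len(deeper))  # 5 layers: 3 pretrained + 2 new ones in the gaps
```

The deeper model starts from the pretrained weights, which is why it can match or slightly beat the base model despite seeing far fewer training tokens afterward.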