---
language:
- tr
inference:
  parameters:
    max_new_tokens: 128
arXiv: 2403.01308
library_name: transformers
pipeline_tag: text2text-generation
license: cc-by-nc-sa-4.0
---
# VBART Model Card

## Model Description  

VBART is the first sequence-to-sequence LLM pre-trained on Turkish corpora from scratch on a large scale. It was pre-trained by VNGRS in February 2023.  
The model is capable of conditional text generation tasks such as text summarization, paraphrasing, and title generation when fine-tuned.
It outperforms its multilingual counterparts despite being considerably smaller.

VBART-XLarge was created by adding extra Transformer layers between the layers of VBART-Large. This allows it to reuse the learned weights of the smaller model while doubling its number of layers.
VBART-XLarge improves on the results of VBART-Large, albeit by small margins.


This repository contains fine-tuned TensorFlow and Safetensors weights of VBART for the text summarization task.

- **Developed by:** [VNGRS-AI](https://vngrs.com/ai/)
- **Model type:**  Transformer encoder-decoder based on mBART architecture
- **Language(s) (NLP):** Turkish
- **License:** CC BY-NC-SA 4.0
- **Finetuned from:** VBART-XLarge
- **Paper:** [arXiv](https://arxiv.org/abs/2403.01308)
## How to Get Started with the Model  
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("vngrs-ai/VBART-XLarge-Summarization",
                            model_input_names=['input_ids', 'attention_mask'])
# To run inference on GPU, uncomment the device_map kwarg below and remove the closing parenthesis before the comment
model = AutoModelForSeq2SeqLM.from_pretrained("vngrs-ai/VBART-XLarge-Summarization")#, device_map="auto")

input_text="..."

token_input = tokenizer(input_text, return_tensors="pt")#.to('cuda')
outputs = model.generate(**token_input)
print(tokenizer.decode(outputs[0]))
```
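Continuing from the snippet above, you will typically want to cap the output length for longer articles. The decoding values below are illustrative: `max_new_tokens` mirrors the widget setting in the metadata, while beam search is an assumption rather than an author recommendation.

```python
# Illustrative decoding settings, continuing from the snippet above.
# max_new_tokens mirrors the inference widget metadata; num_beams/early_stopping are assumptions.
outputs = model.generate(
    **token_input,
    max_new_tokens=128,
    num_beams=4,
    early_stopping=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```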
  
## Training Details  
### Training Data  
The base model is pre-trained on [vngrs-web-corpus](https://huggingface.co/datasets/vngrs-ai/vngrs-web-corpus), which is curated by cleaning and filtering the Turkish portions of the [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4) datasets. These datasets consist of unstructured web-crawl documents; more information can be found on their respective pages. Data is filtered using a set of heuristics and rules, explained in the appendix of our [paper](https://arxiv.org/abs/2403.01308).

The fine-tuning dataset is the Turkish sections of [MLSum](https://huggingface.co/datasets/mlsum), [TRNews](https://huggingface.co/datasets/batubayk/TR-News), [XLSum](https://huggingface.co/datasets/csebuetnlp/xlsum) and [Wikilingua](https://huggingface.co/datasets/wiki_lingua) datasets.
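For reference, a minimal sketch of loading the Turkish splits with the `datasets` library is given below. The configuration names are taken from the respective dataset cards and should be treated as assumptions; depending on your `datasets` version, script-based datasets such as MLSum may require `trust_remote_code=True`.

```python
from datasets import load_dataset

# Turkish configurations (names per the respective dataset cards; verify before use)
mlsum_tr = load_dataset("mlsum", "tu")                  # MLSum, Turkish split
xlsum_tr = load_dataset("csebuetnlp/xlsum", "turkish")  # XLSum, Turkish split
wikilingua_tr = load_dataset("wiki_lingua", "turkish")  # WikiLingua, Turkish split
# TRNews is hosted at https://huggingface.co/datasets/batubayk/TR-News; see its card for access.

# MLSum provides article/summary pairs under the "text" and "summary" fields
sample = mlsum_tr["train"][0]
print(sample["text"][:200], "->", sample["summary"])
```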

### Limitations
This model is fine-tuned for text summarization. It is not intended for any other use case and cannot be fine-tuned for other tasks while retaining the full performance of the base model. It is also not guaranteed that the model will work as expected without the specified prompts.

### Training Procedure  
The base model was pre-trained for 30 days on a total of 708B tokens. This model was then fine-tuned for 20 epochs.
#### Hardware
- **GPUs**: 8 x Nvidia A100-80 GB
#### Software
- TensorFlow
#### Hyperparameters  
##### Pretraining
- **Training regime:** fp16 mixed precision
- **Training objective**: Sentence permutation and span masking (mask lengths sampled from a Poisson distribution with λ=3.5, masking 30% of tokens); see the sketch after this list
- **Optimizer** : Adam optimizer (β1 = 0.9, β2 = 0.98, Ɛ = 1e-6)
- **Scheduler**: Custom scheduler from the original Transformer paper (20,000 warm-up steps)
- **Dropout**: 0.1 (dropped to 0.05 and then to 0 in the last 165k and 205k steps, respectively)
- **Initial Learning rate**: 5e-6
- **Training tokens**: 708B
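
A minimal sketch of the pre-training noising is shown below, assuming BART-style text infilling in which each masked span collapses to a single mask token. The mask symbol and the handling of overlapping spans are simplifications, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def permute_sentences(sentences):
    """Sentence permutation: shuffle the order of sentences in a document."""
    return [sentences[i] for i in rng.permutation(len(sentences))]

def span_mask(tokens, mask_token="<mask>", mask_ratio=0.30, poisson_lambda=3.5):
    """Span masking: replace spans with a single mask token until ~30% of tokens are masked.
    Span lengths are drawn from a Poisson distribution with lambda = 3.5."""
    tokens = list(tokens)
    budget = int(round(mask_ratio * len(tokens)))
    masked = 0
    while masked < budget and len(tokens) > 1:
        # simplified: overlapping or already-masked spans are not handled specially
        length = min(max(1, int(rng.poisson(poisson_lambda))), budget - masked, len(tokens) - 1)
        start = int(rng.integers(0, len(tokens) - length + 1))
        tokens[start:start + length] = [mask_token]  # the whole span collapses to one mask token
        masked += length
    return tokens

sentences = permute_sentences(["Birinci cümle .", "İkinci cümle .", "Üçüncü cümle ."])
print(span_mask(" ".join(sentences).split()))
```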

##### Fine-tuning
- **Training regime:** fp16 mixed precision
- **Optimizer** : Adam optimizer (β1 = 0.9, β2 = 0.98, Ɛ = 1e-6)
- **Scheduler**: Linear decay scheduler
- **Dropout**: 0.1 
- **Learning rate**: 1e-5
- **Fine-tune epochs**: 20 (see the sketch below)
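
The original runs used TensorFlow with the authors' own training setup; purely as a reference sketch, the listed fine-tuning hyperparameters map onto Hugging Face `Seq2SeqTrainingArguments` roughly as follows (the output directory is a placeholder, and dropout is a model-config setting rather than a training argument).

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical mapping of the listed hyperparameters onto the HF Trainer API;
# the actual fine-tuning was done in TensorFlow, so treat this only as a sketch.
training_args = Seq2SeqTrainingArguments(
    output_dir="vbart-xlarge-summarization-finetune",  # placeholder
    fp16=True,                   # fp16 mixed precision
    learning_rate=1e-5,
    num_train_epochs=20,
    lr_scheduler_type="linear",  # linear decay scheduler
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    predict_with_generate=True,
)
# Dropout (0.1) is configured on the model itself, e.g. via model.config.dropout, not here.
```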

#### Metrics
![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f8b3c84588fe31f435a92b/RY1gfk_XVhMeWKI1-GuCi.png)

## Citation  
```
@article{turker2024vbart,
  title={VBART: The Turkish LLM},
  author={Turker, Meliksah and Ari, Erdi and Han, Aydin},
  journal={arXiv preprint arXiv:2403.01308},
  year={2024}
}
```