erdiari committed on
Commit 139b636
1 Parent(s): 9f194c6

Update README.md

Files changed (1)
  1. README.md +83 -45
README.md CHANGED
@@ -1,47 +1,85 @@
  ---
- tags:
- - generated_from_keras_callback
- model-index:
- - name: VBART-Large-Title-Generation-from-News
-   results: []
  ---
-
- <!-- This model card has been generated automatically according to the information Keras had access to. You should
- probably proofread and complete it, then remove this comment. -->
-
- # VBART-Large-Title-Generation-from-News
-
- This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
- It achieves the following results on the evaluation set:
-
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - optimizer: None
- - training_precision: float32
-
- ### Training results
-
-
-
- ### Framework versions
-
- - Transformers 4.38.2
- - TensorFlow 2.13.1
- - Datasets 2.18.0
- - Tokenizers 0.15.2
  ---
+ language:
+ - tr
+ arXiv: 2403.01308
+ library_name: transformers
+ pipeline_tag: text2text-generation
+ license: cc-by-nc-sa-4.0
  ---
+ # VBART Model Card
+
+ ## Model Description
+
+ VBART is the first sequence-to-sequence LLM pre-trained from scratch on large-scale Turkish corpora. It was pre-trained by VNGRS in February 2023.
+ When fine-tuned, the model can perform conditional text generation tasks such as text summarization, paraphrasing, and title generation.
+ It outperforms its multilingual counterparts despite being much smaller.
+
+ This repository contains the fine-tuned TensorFlow and Safetensors weights of VBART for the task of generating titles from news bodies.
+
+ - **Developed by:** [VNGRS-AI](https://vngrs.com/ai/)
+ - **Model type:** Transformer encoder-decoder based on the mBART architecture
+ - **Language(s) (NLP):** Turkish
+ - **License:** CC BY-NC-SA 4.0
+ - **Fine-tuned from:** VBART-Large
+ - **Paper:** [arXiv](https://arxiv.org/abs/2403.01308)
+ ## How to Get Started with the Model
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+ tokenizer = AutoTokenizer.from_pretrained(
+     "vngrs-ai/VBART-Large-Title-Generation-from-News",
+     model_input_names=['input_ids', 'attention_mask']
+ )
+ # To run inference on GPU, uncomment the device_map kwarg and remove the closing parenthesis before the "#"
+ model = AutoModelForSeq2SeqLM.from_pretrained("vngrs-ai/VBART-Large-Title-Generation-from-News")#, device_map="auto")
+
+ input_text = "..."
+
+ token_input = tokenizer(input_text, return_tensors="pt")#.to('cuda')
+ outputs = model.generate(**token_input)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
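+ The `generate` call above uses the default decoding settings. A minimal sketch of settings that usually give cleaner titles; the beam count and length cap are illustrative assumptions, not the decoding setup used by the authors:
+ ```python
+ outputs = model.generate(
+     **token_input,
+     num_beams=4,          # assumption: illustrative beam size
+     max_new_tokens=32,    # titles are short, so cap the generation length
+     early_stopping=True,
+ )
+ # skip_special_tokens=True drops <s>/</s> markers from the decoded title
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+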
+ ## Training Details
+ ### Training Data
+ The base model is pre-trained on [vngrs-web-corpus](https://huggingface.co/datasets/vngrs-ai/vngrs-web-corpus), which is curated by cleaning and filtering the Turkish portions of the [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4) datasets. These datasets consist of unstructured web-crawl documents. More information about the dataset can be found on their respective pages. The data is filtered using a set of heuristics and rules, explained in the appendix of our [paper](https://arxiv.org/abs/2403.01308).
+
+ The fine-tuning dataset is a mixture of the [OpenSubtitles](https://huggingface.co/datasets/open_subtitles), [TED Talks (2013)](https://wit3.fbk.eu/home) and [Tatoeba](https://tatoeba.org/en/) datasets.
+
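+ A minimal sketch for inspecting the pre-training corpus, assuming its default configuration and a `train` split (streaming avoids downloading the full corpus):
+ ```python
+ from datasets import load_dataset
+
+ # Stream the pre-training corpus instead of downloading it in full
+ corpus = load_dataset("vngrs-ai/vngrs-web-corpus", split="train", streaming=True)
+ print(next(iter(corpus)))  # print one raw record
+ ```
+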
+ ### Limitations
+ This model is fine-tuned for title generation. It is not intended for any other use case and cannot be fine-tuned to another task while retaining the full performance of the base model. It is also not guaranteed that this model will work without specified prompts.
+
+ ### Training Procedure
+ The base model was pre-trained for 30 days on a total of 708B tokens; this model was then fine-tuned for 25 epochs.
+ #### Hardware
+ - **GPUs**: 8 x Nvidia A100-80 GB
+ #### Software
+ - TensorFlow
+ #### Hyperparameters
+ ##### Pretraining
+ - **Training regime:** fp16 mixed precision
+ - **Training objective**: Sentence permutation and span masking (mask lengths sampled from a Poisson distribution with λ=3.5, masking 30% of tokens)
+ - **Optimizer**: Adam (β1 = 0.9, β2 = 0.98, ε = 1e-6)
+ - **Scheduler**: Custom scheduler from the original Transformer paper (20,000 warm-up steps); see the sketch after this list
+ - **Dropout**: 0.1 (dropped to 0.05 and then to 0 in the last 165k and 205k steps, respectively)
+ - **Initial learning rate**: 5e-6
+ - **Training tokens**: 708B
+
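+ The "custom scheduler from the original Transformer paper" is the inverse-square-root schedule with linear warm-up from Vaswani et al. (2017). A minimal sketch, assuming a model dimension of 1024 and the 20,000 warm-up steps listed above; the exact scaling used in training is not stated in this card:
+ ```python
+ def transformer_lr(step: int, d_model: int = 1024, warmup_steps: int = 20_000) -> float:
+     """Inverse-square-root learning-rate schedule with linear warm-up."""
+     step = max(step, 1)  # avoid division by zero at step 0
+     return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
+ ```
+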
+ ##### Fine-tuning
+ - **Training regime:** fp16 mixed precision
+ - **Optimizer**: Adam (β1 = 0.9, β2 = 0.98, ε = 1e-6)
+ - **Scheduler**: Linear decay scheduler
+ - **Dropout**: 0.1
+ - **Learning rate**: 5e-5
+ - **Fine-tune epochs**: 25
+
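+ For a comparable fine-tuning run with the Hugging Face `Seq2SeqTrainer`, the arguments below mirror the hyperparameters listed above. This is only a sketch: the authors trained in TensorFlow, and the output directory and batch size here are placeholder assumptions:
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="vbart-title-finetune",  # placeholder path
+     learning_rate=5e-5,
+     lr_scheduler_type="linear",         # linear decay scheduler
+     adam_beta1=0.9,
+     adam_beta2=0.98,
+     adam_epsilon=1e-6,
+     fp16=True,                          # fp16 mixed precision
+     num_train_epochs=25,
+     per_device_train_batch_size=8,      # placeholder, not reported in this card
+     predict_with_generate=True,
+ )
+ # Dropout (0.1) is a model-config setting rather than a training argument:
+ # model.config.dropout = 0.1
+ ```
+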
+ #### Metrics
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f8b3c84588fe31f435a92b/l8PaGu_OUwWKjHQDIP_X4.png)
+
+ ## Citation
+ ```
+ @article{turker2024vbart,
+   title={VBART: The Turkish LLM},
+   author={Turker, Meliksah and Ari, Erdi and Han, Aydin},
+   journal={arXiv preprint arXiv:2403.01308},
+   year={2024}
+ }
+ ```