meliksahturker committed on
Commit f840527
1 Parent(s): 2c33b77

Upload TFMBartForConditionalGeneration

Files changed (4)
  1. README.md +45 -108
  2. config.json +33 -0
  3. generation_config.json +9 -0
  4. tf_model.h5 +3 -0
README.md CHANGED
@@ -1,110 +1,47 @@
  ---
- datasets:
- - mlsum
- - batubayk/TR-News
- - csebuetnlp/xlsum
- - wiki_lingua
- language:
- - tr
- results:
- - task:
- type: text-summarization
- dataset:
- name: mlsum
- type: mlsum
- metrics:
- - name: rouge(r1/r2/rl)
- type: rouge
- value: 45.75/32.71/39.86
- - task:
- type: text-summarization
- dataset:
- name: batubayk/TR-News
- type: batubayk/TR-News
- metrics:
- - name: rouge(r1/r2/rl)
- type: rouge
- value: 41.97/28.26/36.69
- - task:
- type: text-summarization
- dataset:
- name: csebuetnlp/xlsum
- type: csebuetnlp/xlsum
- metrics:
- - name: rouge(r1/r2/rl)
- type: rouge
- value: 34.15/17.94/28.03
- arxiv: 2403.01308
- library_name: transformers
- pipeline_tag: text2text-generation
  ---
- # VBART Model Card
-
- ## Model Description
-
- VBART is the first sequence-to-sequence model trained on Turkish corpora from scratch. It was developed by VNGRS in (TODO: when?).
- With fine-tuning, this model is capable of text transformation tasks such as summarization, paraphrasing, and title generation.
-
- This model scores better on many tasks while being much smaller than other implementations.
-
- This repository contains the fine-tuned weights of VBART for the summarization task, using the Turkish sections of [mlsum](https://huggingface.co/datasets/mlsum), [TRNews](https://huggingface.co/datasets/batubayk/TR-News), [XLSum](https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/turkish) and [Wikilingua](https://huggingface.co/datasets/wiki_lingua).
-
- - **Developed by:** [VNGRS](https://vngrs.com/)
- - **Model type:** Transformer encoder-decoder based on mBART
- - **Language(s) (NLP):** Turkish
- - **License:** [More Information Needed]
- - **Fine-tuned from model:** VBART
- - **Paper:** [arXiv](https://arxiv.org/abs/2403.01308)
- ## How to Get Started with the Model
- Use the code below to get started with the model.
- -> TODO: add a code snippet once the model is uploaded
- [More Information Needed]
-
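Below is a minimal usage sketch for these TF weights with Hugging Face Transformers, assuming the tokenizer files are available alongside them; the repository id in the snippet is a placeholder, not a confirmed path.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "VBART-Large-Summarization"  # placeholder repository id; replace with the actual repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)  # loads the uploaded tf_model.h5

text = "..."  # Turkish document to summarize
inputs = tokenizer(text, return_tensors="tf", truncation=True, max_length=1024)
summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_new_tokens=128,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```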
- ## Training Details
-
- ### Training Data
- The base model's training data is a filtered, mixed corpus made of the Turkish parts of the [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4) datasets. These datasets consist of documents of unstructured web-crawl data; more information can be found on their respective pages. The data was then filtered using a set of heuristics and certain rules, explained in the appendix of our [paper](https://arxiv.org/abs/2403.01308).
-
- The fine-tuning dataset is the Turkish sections of [mlsum](https://huggingface.co/datasets/mlsum), [TRNews](https://huggingface.co/datasets/batubayk/TR-News), [XLSum](https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/turkish) and [Wikilingua](https://huggingface.co/datasets/wiki_lingua), as mentioned above.
-
-
- ### Limitations
- This model is fine-tuned for the summarization task. It is not intended to be used in any other case and cannot be fine-tuned to any other task with the full performance of the base model.
-
- ### Training Procedure
- Pretrained for 30 days, resulting in a total of 23 epochs of training. TODO: state how many tokens this corresponds to.
- #### Hardware
- - **GPUs**: 8x Nvidia A100-80 GB
- #### Software
- - TensorFlow
- #### Hyperparameters
- ##### Pretraining
- - **Training regime:** fp16 mixed precision
- - **Training objective:** Sentence permutation and span masking (mask lengths sampled from a Poisson distribution with $\lambda = 3.5$, masking 30% of the tokens in total; see the sketch after this list)
- - **Optimizer:** Adam optimizer (\(\beta_{1} = 0.9, \beta_{2} = 0.98, \epsilon = 10^{-6}\))
- - **Scheduler:** Linear decay scheduler (20,000 warm-up steps)
- - **Dropout:** 0.1 (dropped to 0.05 and 0 in the last 160k steps)
- - **Learning rate:** \(5 \cdot 10^{-6}\)
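As a rough illustration of the span-masking objective above (a simplified sketch, not the authors' exact implementation: it ignores special tokens and may re-mask already collapsed spans), span lengths are drawn from a Poisson distribution with $\lambda = 3.5$ and each span is replaced with a single mask token until roughly 30% of the tokens are covered:

```python
import numpy as np

rng = np.random.default_rng(0)

def span_mask(tokens, mask_token="<mask>", mask_ratio=0.30, lam=3.5):
    """BART-style text-infilling sketch: mask ~mask_ratio of tokens in Poisson-length spans."""
    tokens = list(tokens)
    budget = int(len(tokens) * mask_ratio)  # number of tokens to cover with masks
    masked = 0
    while masked < budget:
        length = min(max(1, int(rng.poisson(lam))), budget - masked)
        start = int(rng.integers(0, len(tokens) - length + 1))
        # Collapse the chosen span into a single mask token.
        tokens[start:start + length] = [mask_token]
        masked += length
    return tokens

print(span_mask("bu cümle maskeleme hedefini göstermek için yazılmış kısa bir örnektir".split()))
```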
- ##### Finetuning
- - **Training regime:** fp16 mixed precision
- - **Optimizer:** Adam optimizer (\(\beta_{1} = 0.9, \beta_{2} = 0.98, \epsilon = 10^{-6}\))
- - **Scheduler:** Linear decay scheduler
- - **Dropout:** 0.1
- - **Learning rate:** \(5 \cdot 10^{-5}\)
- #### Metrics
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f8b3c84588fe31f435a92b/QCef-9yumzG2sHksOGcUs.png)
-
- ## License
-
-
- ## Citation
- ```
- @misc{VBART,
- title={VBART: The Turkish LLM},
- author={Melikşah Türker and Mehmet Erdi Arı and Aydın Han},
- year={2024},
- eprint={2403.01308},
- archivePrefix={arXiv},
- primaryClass={cs.CL}
- }
- ```
 
  ---
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: VBART-Large-Summarization
+ results: []
  ---
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # VBART-Large-Summarization
+
+ This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - optimizer: None
+ - training_precision: float32
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - Transformers 4.38.2
+ - TensorFlow 2.13.1
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
config.json ADDED
@@ -0,0 +1,33 @@
+ {
+ "activation_dropout": 0.0,
+ "activation_function": "gelu",
+ "architectures": [
+ "MBartForConditionalGeneration"
+ ],
+ "attention_dropout": 0.0,
+ "bos_token_id": 2,
+ "classifier_dropout": 0.0,
+ "d_model": 1024,
+ "decoder_attention_heads": 16,
+ "decoder_ffn_dim": 4096,
+ "decoder_layerdrop": 0.0,
+ "decoder_layers": 12,
+ "decoder_start_token_id": 2,
+ "dropout": 0.1,
+ "encoder_attention_heads": 16,
+ "encoder_ffn_dim": 4096,
+ "encoder_layerdrop": 0.0,
+ "encoder_layers": 12,
+ "eos_token_id": 3,
+ "forced_eos_token_id": 3,
+ "init_std": 0.02,
+ "is_encoder_decoder": true,
+ "max_position_embeddings": 1024,
+ "model_type": "mbart",
+ "num_hidden_layers": 12,
+ "pad_token_id": 0,
+ "scale_embedding": false,
+ "transformers_version": "4.38.2",
+ "use_cache": true,
+ "vocab_size": 32000
+ }
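For reference, a small sketch (not part of this commit) of how the configuration above maps onto the model architecture in Transformers; the local path is assumed purely for illustration.

```python
from transformers import MBartConfig, TFMBartForConditionalGeneration

# Assumes the config.json shown above has been saved locally; the path is illustrative.
config = MBartConfig.from_pretrained("./config.json")
print(config.encoder_layers, config.decoder_layers, config.d_model, config.vocab_size)
# 12 12 1024 32000

# Building from the config alone yields randomly initialized weights; loading the
# uploaded tf_model.h5 requires from_pretrained on the model repository itself.
model = TFMBartForConditionalGeneration(config)
```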
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "_from_model_config": true,
+ "bos_token_id": 2,
+ "decoder_start_token_id": 2,
+ "eos_token_id": 3,
+ "forced_eos_token_id": 3,
+ "pad_token_id": 0,
+ "transformers_version": "4.38.2"
+ }
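Likewise, a short sketch of how these generation defaults are read; when the model is loaded from the repository, generate() picks them up automatically, and any argument passed to generate() overrides them. The local directory is assumed for illustration.

```python
from transformers import GenerationConfig

# Assumes the generation_config.json shown above sits in the current directory.
gen_config = GenerationConfig.from_pretrained(".")
print(gen_config.decoder_start_token_id, gen_config.eos_token_id, gen_config.pad_token_id)
# 2 3 0
```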
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dba67d9e9003c7b8f0ee54bf14b6f27e37c5b05821f2fca7a5e11723afbad22e
+ size 1551059288