fdelucaf committed
Commit 7f73329
1 Parent(s): d94d549

Update README.md

Files changed (1):
  1. README.md +65 -42
README.md CHANGED
@@ -1,32 +1,20 @@
---
license: apache-2.0
+ language:
+ - gl
+ - ca
+ metrics:
+ - bleu
+ library_name: fairseq
---
## Projecte Aina’s Galician-Catalan machine translation model
-
- ## Table of Contents
- - [Model Description](#model-description)
- - [Intended Uses and Limitations](#intended-use)
- - [How to Use](#how-to-use)
- - [Training](#training)
-   - [Training data](#training-data)
-   - [Training procedure](#training-procedure)
-     - [Data Preparation](#data-preparation)
-     - [Tokenization](#tokenization)
-     - [Hyperparameters](#hyperparameters)
- - [Evaluation](#evaluation)
-   - [Variable and Metrics](#variable-and-metrics)
-   - [Evaluation Results](#evaluation-results)
- - [Additional Information](#additional-information)
-   - [Author](#author)
-   - [Contact Information](#contact-information)
-   - [Copyright](#copyright)
-   - [Licensing Information](#licensing-information)
-   - [Funding](#funding)
-   - [Disclaimer](#disclaimer)

## Model description

- This model was trained from scratch using the [Fairseq toolkit](https://fairseq.readthedocs.io/en/latest/) on a combination of Galician-Catalan datasets totalling 10.017.995 sentence pairs. 4.267.995 sentence pairs were parallel data collected from the web while the remaining 5.750.000 sentence pairs were parallel synthetic data created using the GL-ES translator of [Proxecto Nós](https://huggingface.co/proxectonos/Nos_MT-OpenNMT-es-gl). The model was evaluated on the Flores, TaCon and NTREX evaluation datasets.
+ This model was trained from scratch using the [Fairseq toolkit](https://fairseq.readthedocs.io/en/latest/) on a combination of Galician-Catalan datasets
+ totalling 10.017.995 sentence pairs. 4.267.995 sentence pairs were parallel data collected from the web while the remaining 5.750.000 sentence pairs
+ were parallel synthetic data created using the GL-ES translator of [Proxecto Nós](https://huggingface.co/proxectonos/Nos_MT-OpenNMT-es-gl).
+ The model was evaluated on the Flores, TaCon and NTREX evaluation datasets.

## Intended uses and limitations

@@ -46,7 +34,7 @@ Translate a sentence using python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download
- model_dir = snapshot_download(repo_id="projecte-aina/mt-aina-gl-ca", revision="main")
+ model_dir = snapshot_download(repo_id="projecte-aina/aina-translator-gl-ca", revision="main")
tokenizer=pyonmttok.Tokenizer(mode="none", sp_model_path = model_dir + "/spm.model")
tokenized=tokenizer.tokenize("Benvido ao proxecto Ilenia.")
translator = ctranslate2.Translator(model_dir)
@@ -54,6 +42,11 @@ translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]['tokens']))
```
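The snippet above translates a single sentence. The same calls extend to batches; below is a minimal sketch under the same assumptions as the snippet in this diff (a CTranslate2 checkpoint with spm.model at the repo root). The input sentences are only illustrative.

```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download

# Same checkpoint layout as the snippet above: CTranslate2 weights plus spm.model.
model_dir = snapshot_download(repo_id="projecte-aina/aina-translator-gl-ca", revision="main")
tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/spm.model")
translator = ctranslate2.Translator(model_dir)

# Illustrative Galician inputs; tokenize() returns (tokens, features), hence the [0].
sentences = ["Benvido ao proxecto Ilenia.", "O modelo traduce do galego ao catalán."]
batch = [tokenizer.tokenize(s)[0] for s in sentences]

# translate_batch returns one result per input, in order.
for result in translator.translate_batch(batch):
    print(tokenizer.detokenize(result[0]["tokens"]))
```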

+
+ ## Limitations and bias
+ At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model.
+ However, we are well aware that our models may be biased. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
+
## Training

### Training data
@@ -75,18 +68,23 @@ The Galician-Catalan data collected from the web was a combination of the follow
| **Total** | **4.952.275** |

The datasets were concatenated before filtering to avoid intra-dataset duplicates and the final size was 4.267.995.
- The 5.750.000 sentence pairs of synthetic parallel data were created from a random sampling of the [Projecte Aina ES-CA corpus](https://huggingface.co/projecte-aina/mt-aina-ca-es)
+ The 5.750.000 sentence pairs of synthetic parallel data were created from a random sampling
+ of the [Projecte Aina ES-CA corpus](https://huggingface.co/projecte-aina/mt-aina-ca-es).

### Training procedure

### Data preparation

- All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75. This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE). The filtered datasets are then concatenated to form a final corpus of 10.017.995 and before training the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py)
+ All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
+ This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
+ The filtered datasets are then concatenated to form a final corpus of 10.017.995 sentence pairs, and before training the punctuation is normalized using a
+ modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).

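The similarity filter described above can be sketched as follows. This is a minimal illustration assuming the sentence-transformers package; the 0.75 threshold is the one stated in the README, while the function and variable names are hypothetical rather than the project's actual pipeline.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# LaBSE embeds sentences from different languages into a shared space, so the
# cosine similarity of a GL sentence and its CA counterpart approximates how
# well the pair lines up as a translation.
model = SentenceTransformer("sentence-transformers/LaBSE")

def filter_pairs(gl_sentences, ca_sentences, threshold=0.75):
    # With normalize_embeddings=True, the row-wise dot product equals cosine similarity.
    gl_emb = model.encode(gl_sentences, normalize_embeddings=True)
    ca_emb = model.encode(ca_sentences, normalize_embeddings=True)
    scores = np.sum(gl_emb * ca_emb, axis=1)
    return [pair for pair, score in zip(zip(gl_sentences, ca_sentences), scores)
            if score >= threshold]
```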
#### Tokenization

- All data is tokenized using sentencepiece, with a 50 thousand token sentencepiece model learned from the combination of all filtered training data. This model is included.
+ All data is tokenized using sentencepiece, with a 50 thousand token sentencepiece model learned from the combination of all filtered training data.
+ This model is included.
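A 50k SentencePiece model of the kind described above could be trained along these lines; a sketch, where the corpus path is illustrative.

```python
import sentencepiece as spm

# Learn a joint 50k vocabulary over the combined, filtered GL-CA training text
# (one sentence per line); produces spm.model and spm.vocab.
spm.SentencePieceTrainer.train(
    input="train.gl-ca.txt",  # illustrative path to the concatenated corpus
    model_prefix="spm",
    vocab_size=50000,
)
```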
 
#### Hyperparameters

@@ -119,33 +117,58 @@ This data was then concatenated with the synthetic parallel data and training co
Weights were saved every 1000 updates and reported results are the average of the last 4 checkpoints.

## Evaluation
+
### Variable and metrics
- We use the BLEU score for evaluation on test sets: [Flores-200](https://github.com/facebookresearch/flores/tree/main/flores200), [TaCon](https://elrc-share.eu/repository/browse/tacon-spanish-constitution-mt-test-set/84a96138b98611ec9c1a00155d02670628f3e6857b0f422abd82abc3795ec8c2/) and [NTREX](https://github.com/MicrosoftTranslator/NTREX)
+
+ We use the BLEU score for evaluation on test sets: [Flores-200](https://github.com/facebookresearch/flores/tree/main/flores200),
+ [TaCon](https://elrc-share.eu/repository/browse/tacon-spanish-constitution-mt-test-set/84a96138b98611ec9c1a00155d02670628f3e6857b0f422abd82abc3795ec8c2/)
+ and [NTREX](https://github.com/MicrosoftTranslator/NTREX).
+
### Evaluation results
- Below are the evaluation results on the machine translation from Galician to Catalan compared to [Google Translate](https://translate.google.com/), [M2M100 1.2B](https://huggingface.co/facebook/m2m100_1.2B), [NLLB 200 3.3B](https://huggingface.co/facebook/nllb-200-3.3B) and [ NLLB-200's distilled 1.3B variant](https://huggingface.co/facebook/nllb-200-distilled-1.3B):
- | Test set |Google Translate|M2M100 1.2B| NLLB 1.3B | NLLB 3.3 |mt-aina-gl-ca|
+
+ Below are the evaluation results on the machine translation from Galician to Catalan compared to [Google Translate](https://translate.google.com/),
+ [M2M100 1.2B](https://huggingface.co/facebook/m2m100_1.2B), [NLLB 200 3.3B](https://huggingface.co/facebook/nllb-200-3.3B) and
+ [NLLB-200's distilled 1.3B variant](https://huggingface.co/facebook/nllb-200-distilled-1.3B):
+
+ | Test set | Google Translate | M2M100 1.2B | NLLB 1.3B | NLLB 3.3B | aina-translator-gl-ca |
|----------------------|----|-------|-----------|------------------|---------------|
|Flores 101 devtest |**36,4**|32,6| 22,3 | 34,3 | 32,4 |
| TaCON |48,4|56,5|32,2 | 54,1 | **58,2** |
| NTREX |**34,7**|34,0|20,4 | 34,2 | 33,7 |
| Average |39,0|41,0| 25,0 | 40,9 | **41,4** |
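BLEU scores of the kind reported above can be computed with sacreBLEU; a minimal sketch, assuming one detokenized sentence per line and illustrative file names.

```python
import sacrebleu

# Hypotheses are the model's detokenized Catalan outputs; references are the
# gold translations. Line i of each file must refer to the same source sentence.
with open("hypotheses.ca", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("references.ca", encoding="utf-8") as f:
    references = [line.strip() for line in f]

# corpus_bleu takes a list of reference streams, hence the extra list nesting.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.1f}")
```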
 
 
## Additional information
+
### Author
- Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
- ### Contact information
- For further information, send an email to <langtech@bsc.es>
+ The Language Technologies Unit at the Barcelona Supercomputing Center.
+
+ ### Contact
+ For further information, please send an email to <langtech@bsc.es>.
+
### Copyright
- Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023)
- ### Licensing information
- This work is licensed under a [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
+ Copyright (c) 2023 by the Language Technologies Unit, Barcelona Supercomputing Center.
+
+ ### License
+ [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
+
### Funding
- This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the [project ILENIA](https://proyectoilenia.es/) with reference 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 and 2022/TL22/00215334
- ## Limitations and Bias
- At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
+ This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/).
+
### Disclaimer
+
<details>
<summary>Click to expand</summary>
- The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
- When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
- In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
- </details>
+
+ The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0.
+
+ Be aware that the model may have biases and/or any other undesirable distortions.
+
+ When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it)
+ or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and,
+ in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
+
+ In no event shall the owner and creator of the model (Barcelona Supercomputing Center)
+ be liable for any results arising from the use made by third parties.
+
+ </details>