---
license: apache-2.0
---
## Projecte Aina’s Galician-Catalan machine translation model

## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
  - [Training data](#training-data)
  - [Training procedure](#training-procedure)
	- [Data Preparation](#data-preparation)
	- [Tokenization](#tokenization)
	- [Hyperparameters](#hyperparameters)
- [Evaluation](#evaluation)
   - [Variables and Metrics](#variables-and-metrics)
   - [Evaluation Results](#evaluation-results)
- [Additional Information](#additional-information)
  - [Author](#author)
  - [Contact Information](#contact-information)
  - [Copyright](#copyright)
  - [Licensing Information](#licensing-information)
  - [Funding](#funding)
  - [Disclaimer](#disclaimer)
 
## Model description

This model was trained from scratch using the [Fairseq toolkit](https://fairseq.readthedocs.io/en/latest/) on a combination of Galician-Catalan datasets totalling 10.017.995 sentence pairs. Of these, 4.267.995 sentence pairs were parallel data collected from the web, while the remaining 5.750.000 sentence pairs were synthetic parallel data created with the ES-GL translator of [Proxecto Nós](https://huggingface.co/proxectonos/Nos_MT-OpenNMT-es-gl). The model was evaluated on the Flores, TaCon and NTREX evaluation datasets.

## Intended uses and limitations

You can use this model for machine translation from Galician to Catalan.

## How to use

### Usage
Required libraries:

```bash
pip install ctranslate2 pyonmttok huggingface_hub
```

Translate a sentence using Python:
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download

# Download the CTranslate2 model and the SentencePiece tokenizer from the Hub
model_dir = snapshot_download(repo_id="projecte-aina/mt-aina-gl-ca", revision="main")

# Tokenize the source sentence; tokenize() returns a (tokens, features) tuple
tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/spm.model")
tokenized = tokenizer.tokenize("Benvido ao proxecto Ilenia.")

# Translate the token sequence and detokenize the best hypothesis
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]["tokens"]))
```

## Training

### Training data

The Galician-Catalan data collected from the web was a combination of the following datasets:

| Dataset                    | Sentences before cleaning |
|----------------------------|---------------------------|
| CCMatrix v1                | 3.041.152                 |
| XLENT                      | 371.377                   |
| WikiMatrix                 | 286.446                   |
| GNOME                      | 18                        |
| KDE4                       | 147.182                   |
| TED2020 v1                 | 11.041                    |
| OpenSubtitles              | 16.379                    |
| CoVoST 2                   | 263.729                   |
| Gene-Crawling              | 38.320                    |
| Memories Projectes Lliures | 794.631                   |
| **Total**                  | **4.952.275**             |

The datasets were concatenated before filtering so that duplicates shared across datasets were also removed; the final size after filtering was 4.267.995 sentence pairs.
The 5.750.000 sentence pairs of synthetic parallel data were created from a random sample of the [Projecte Aina ES-CA corpus](https://huggingface.co/projecte-aina/mt-aina-ca-es).

### Training procedure

#### Data preparation

All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75, computed on sentence embeddings obtained with [LaBSE](https://huggingface.co/sentence-transformers/LaBSE). The filtered datasets are then concatenated to form a final corpus of 10.017.995 sentence pairs and, before training, the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).
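
A minimal sketch of this similarity filtering, assuming the pairs are held in two parallel Python lists and using the `sentence-transformers` implementation of LaBSE (the variable names and sentences below are only illustrative):

```python
from sentence_transformers import SentenceTransformer

# Hypothetical parallel lists of Galician and Catalan sentences
gl_sentences = ["Benvido ao proxecto Ilenia."]
ca_sentences = ["Benvingut al projecte Ilenia."]

# LaBSE produces language-agnostic sentence embeddings
model = SentenceTransformer("sentence-transformers/LaBSE")
emb_gl = model.encode(gl_sentences, convert_to_tensor=True, normalize_embeddings=True)
emb_ca = model.encode(ca_sentences, convert_to_tensor=True, normalize_embeddings=True)

# With normalized embeddings, cosine similarity is just the dot product
scores = (emb_gl * emb_ca).sum(dim=1)

# Keep only unseen pairs above the 0.75 threshold
seen, kept = set(), []
for gl, ca, score in zip(gl_sentences, ca_sentences, scores.tolist()):
    if (gl, ca) not in seen and score >= 0.75:
        seen.add((gl, ca))
        kept.append((gl, ca))
```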


#### Tokenization

All data is tokenized using SentencePiece, with a 50 thousand token SentencePiece model learned from the combination of all filtered training data. This model is included.
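
A minimal sketch of how such a model could be trained with the `sentencepiece` Python package, assuming the concatenated, filtered training data sits in a plain-text file (the file name is hypothetical):

```python
import sentencepiece as spm

# Train a joint 50k-token SentencePiece model on the combined filtered data
spm.SentencePieceTrainer.train(
    input="train.gl-ca.all.txt",  # hypothetical path to the concatenated corpus
    model_prefix="spm",           # produces spm.model and spm.vocab
    vocab_size=50000,
)

# Load the resulting model and tokenize a sentence
sp = spm.SentencePieceProcessor(model_file="spm.model")
print(sp.encode("Benvido ao proxecto Ilenia.", out_type=str))
```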

#### Hyperparameters

The model is based on the Transformer-XLarge proposed by [Subramanian et al.](https://aclanthology.org/2021.wmt-1.18.pdf)
The following hyperparameters were set on the Fairseq toolkit:

| Hyperparameter                 	| Value                        	|
|------------------------------------|----------------------------------|
| Architecture                   	| transformer_vaswani_wmt_en_de_big |
| Embedding size                 	| 1024                         	|
| Feedforward size               	| 4096                         	|
| Number of heads                	| 16                           	|
| Encoder layers                 	| 24                           	|
| Decoder layers                 	| 6                            	|
| Normalize before attention     	| True                         	|
| --share-decoder-input-output-embed | True                         	|
| --share-all-embeddings         	| True                         	|
| Effective batch size           	| 48.000                       	|
| Optimizer                      	| adam                         	|
| Adam betas                     	| (0.9, 0.980)                 	|
| Clip norm                      	| 0.0                          	|
| Learning rate                  	| 5e-4                         	|
| Lr. scheduler                  	| inverse sqrt                 	|
| Warmup updates                 	| 8000                         	|
| Dropout                        	| 0.1                          	|
| Label smoothing                	| 0.1                          	|

The model was trained for 24.000 updates on the parallel data collected from the web. 
This data was then concatenated with the synthetic parallel data and training continued for a total of 34.000 updates.
Weights were saved every 1000 updates and reported results are the average of the last 4 checkpoints.
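
Fairseq ships an `average_checkpoints.py` script for this step; a minimal sketch of the same idea, assuming Fairseq checkpoints that store their weights under the `"model"` key (the file names below are hypothetical), could look like:

```python
import torch

# Hypothetical paths to the last 4 saved checkpoints
paths = [
    "checkpoints/checkpoint_31000.pt",
    "checkpoints/checkpoint_32000.pt",
    "checkpoints/checkpoint_33000.pt",
    "checkpoints/checkpoint_34000.pt",
]

avg = None
for path in paths:
    state = torch.load(path, map_location="cpu")["model"]
    if avg is None:
        avg = {k: v.clone().float() for k, v in state.items()}
    else:
        for k, v in state.items():
            avg[k] += v.float()

# Divide the accumulated weights by the number of checkpoints
for k in avg:
    avg[k] /= len(paths)
```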

## Evaluation
### Variables and metrics
We use the BLEU score for evaluation on the following test sets: [Flores-200](https://github.com/facebookresearch/flores/tree/main/flores200), [TaCon](https://elrc-share.eu/repository/browse/tacon-spanish-constitution-mt-test-set/84a96138b98611ec9c1a00155d02670628f3e6857b0f422abd82abc3795ec8c2/) and [NTREX](https://github.com/MicrosoftTranslator/NTREX).
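
A minimal sketch of how BLEU can be computed with the `sacrebleu` library, assuming the system outputs and reference translations are held in parallel Python lists (the variable names and sentences are hypothetical):

```python
import sacrebleu

# Hypothetical system outputs and reference translations
hypotheses = ["Benvingut al projecte Ilenia."]
references = ["Benvingut al projecte Ilenia."]

# corpus_bleu expects a list of hypotheses and a list of reference streams
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(bleu.score)
```
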
### Evaluation results
Below are the evaluation results for machine translation from Galician to Catalan, compared to [Google Translate](https://translate.google.com/), [M2M100 1.2B](https://huggingface.co/facebook/m2m100_1.2B), [NLLB-200 3.3B](https://huggingface.co/facebook/nllb-200-3.3B) and [NLLB-200's distilled 1.3B variant](https://huggingface.co/facebook/nllb-200-distilled-1.3B):

| Test set           | Google Translate | M2M100 1.2B | NLLB 1.3B | NLLB 3.3B | mt-aina-gl-ca |
|--------------------|------------------|-------------|-----------|-----------|---------------|
| Flores 101 devtest | **36,4**         | 32,6        | 22,3      | 34,3      | 32,4          |
| TaCon              | 48,4             | 56,5        | 32,2      | 54,1      | **58,2**      |
| NTREX              | **34,7**         | 34,0        | 20,4      | 34,2      | 33,7          |
| Average            | 39,8             | 41,0        | 25,0      | 40,9      | **41,4**      |
## Additional information
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center. 
### Contact information
For further information, send an email to <langtech@bsc.es>.
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023)
### Licensing information
This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the [project ILENIA](https://proyectoilenia.es/) with references 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 and 2022/TL22/00215334.
### Limitations and bias
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
</details>