lucio committed
Commit bea1a8e
1 Parent(s): 2eba3a7

update model card README.md

Files changed (1)
  1. README.md +33 -78
README.md CHANGED
@@ -1,66 +1,38 @@
  ---
- language:
- - ug
  license: apache-2.0
  tags:
- - automatic-speech-recognition
- - mozilla-foundation/common_voice_8_0
  - generated_from_trainer
- - ug
- - robust-speech-event
  datasets:
- - mozilla-foundation/common_voice_8_0
+ - common_voice
  model-index:
- - name: XLS-R-300M Uyghur CV8
-   results:
-   - task:
-       name: Automatic Speech Recognition
-       type: automatic-speech-recognition
-     dataset:
-       name: Common Voice 8
-       type: mozilla-foundation/common_voice_8_0
-       args: ug
-     metrics:
-     - name: Test WER
-       type: wer
-       value: 28.74
-     - name: Test CER
-       type: cer
-       value: 5.38
+ - name: xls-r-uyghur-cv8
+   results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # XLS-R-300M Uyghur CV8
+ # xls-r-uyghur-cv8

- This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UG dataset.
+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.2036
- - WER: 0.2977
+ - Loss: 0.2163
+ - Wer: 0.3241
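
For readers of the diff, a minimal inference sketch follows; the repo id `lucio/xls-r-uyghur-cv8` and the 16 kHz mono input file are assumptions, not confirmed by this commit:

```python
# Hedged sketch: the repo id is assumed from the card name, and the audio
# file is a placeholder; pipeline() handles feature extraction + CTC decoding.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="lucio/xls-r-uyghur-cv8")
print(asr("uyghur_sample.wav")["text"])
```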

  ## Model description

- For a description of the model architecture, see [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m).
-
- The model vocabulary consists of the alphabetic characters of the [Perso-Arabic script for the Uyghur language](https://omniglot.com/writing/uyghur.htm), with punctuation removed.
+ More information needed
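
The vocabulary construction described in the removed text is not shown in the diff; below is a hedged sketch of how such a punctuation-stripped character vocabulary is typically built for CTC. The regex and helper names are assumptions, not the training script's actual code:

```python
# Illustrative only: the exact punctuation set used in training is assumed.
import re

PUNCT = re.compile(r"[\.\,\?\!\-\;\:\"\“\”\‘\’\%\…\،\؟\؛]")

def normalize(sentence: str) -> str:
    """Strip punctuation and lowercase, keeping the Perso-Arabic letters."""
    return PUNCT.sub("", sentence).lower()

def build_ctc_vocab(sentences):
    chars = sorted({c for s in sentences for c in normalize(s)})
    vocab = {c: i for i, c in enumerate(chars)}
    if " " in vocab:                 # CTC convention: explicit word delimiter
        vocab["|"] = vocab.pop(" ")
    vocab["[UNK]"] = len(vocab)
    vocab["[PAD]"] = len(vocab)
    return vocab
```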

  ## Intended uses & limitations

- This model is expected to be of some utility for low-fidelity use cases such as:
- - Draft video captions
- - Indexing of recorded broadcasts
-
- The model is not reliable enough to serve as a substitute for live captions for accessibility purposes, and it should not be used in a manner that would infringe the privacy of the contributors to the Common Voice dataset or of any other speakers.
+ More information needed

  ## Training and evaluation data

- The combination of the official Common Voice `train` and `dev` splits was used as training data. The official `test` split served both as validation data and for the final evaluation.
+ More information needed
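
That split arrangement corresponds roughly to the following `datasets` calls; the dataset id and language code come from the removed metadata, and note that CV8 names its `dev` split `validation` on the Hub:

```python
# Sketch of the split setup described above (CV8 is gated, hence the token).
from datasets import load_dataset, concatenate_datasets

cv = "mozilla-foundation/common_voice_8_0"
train = load_dataset(cv, "ug", split="train", use_auth_token=True)
dev = load_dataset(cv, "ug", split="validation", use_auth_token=True)

train_data = concatenate_datasets([train, dev])  # train + dev for training
eval_data = load_dataset(cv, "ug", split="test", use_auth_token=True)  # validation and final eval
```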

  ## Training procedure

- The featurization layers of the XLS-R model are frozen while a final CTC/LM layer is tuned on the Uyghur CV8 example sentences. A ramped learning-rate schedule is used, with an initial warmup phase of 2000 steps, a maximum of 0.0001, and a cooldown back towards 0 over the remainder of the 18500 steps (100 epochs).
-
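
A sketch of that setup against the `transformers` Trainer API; only the warmup, peak learning rate, and step counts come from the card, the rest is illustrative:

```python
# Hedged sketch: frozen feature encoder, linear warmup to 1e-4 over 2000
# steps, then decay towards 0 across 18500 total steps. output_dir and
# anything else not stated in the card are placeholders.
from transformers import Wav2Vec2ForCTC, TrainingArguments

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-xls-r-300m")
model.freeze_feature_encoder()   # keep the convolutional featurizer fixed

args = TrainingArguments(
    output_dir="xls-r-uyghur-cv8",
    learning_rate=1e-4,          # stated maximum
    warmup_steps=2000,           # stated warmup phase
    max_steps=18500,             # ~100 epochs on this data
    lr_scheduler_type="linear",  # ramp up, then cool back towards 0
)
```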
  ### Training hyperparameters

  The following hyperparameters were used during training:
@@ -80,48 +52,31 @@ The following hyperparameters were used during training:

  | Training Loss | Epoch | Step | Validation Loss | Wer |
  |:-------------:|:-----:|:-----:|:---------------:|:------:|
- | 3.2892 | 2.66 | 500 | 3.2415 | 1.0 |
- | 2.9206 | 5.32 | 1000 | 2.4381 | 1.0056 |
- | 1.4909 | 7.97 | 1500 | 0.5428 | 0.6705 |
- | 1.3395 | 10.64 | 2000 | 0.4207 | 0.5995 |
- | 1.2718 | 13.3 | 2500 | 0.3743 | 0.5648 |
- | 1.1798 | 15.95 | 3000 | 0.3225 | 0.4927 |
- | 1.1392 | 18.61 | 3500 | 0.3097 | 0.4627 |
- | 1.1143 | 21.28 | 4000 | 0.2996 | 0.4505 |
- | 1.0923 | 23.93 | 4500 | 0.2841 | 0.4229 |
- | 1.0516 | 26.59 | 5000 | 0.2705 | 0.4113 |
- | 1.051 | 29.25 | 5500 | 0.2622 | 0.4078 |
- | 1.021 | 31.91 | 6000 | 0.2611 | 0.4009 |
- | 0.9886 | 34.57 | 6500 | 0.2498 | 0.3921 |
- | 0.984 | 37.23 | 7000 | 0.2521 | 0.3845 |
- | 0.9631 | 39.89 | 7500 | 0.2413 | 0.3791 |
- | 0.9353 | 42.55 | 8000 | 0.2391 | 0.3612 |
- | 0.922 | 45.21 | 8500 | 0.2363 | 0.3571 |
- | 0.9116 | 47.87 | 9000 | 0.2285 | 0.3668 |
- | 0.8951 | 50.53 | 9500 | 0.2256 | 0.3729 |
- | 0.8865 | 53.19 | 10000 | 0.2228 | 0.3663 |
- | 0.8792 | 55.85 | 10500 | 0.2221 | 0.3656 |
- | 0.8682 | 58.51 | 11000 | 0.2228 | 0.3323 |
- | 0.8492 | 61.17 | 11500 | 0.2167 | 0.3446 |
- | 0.8365 | 63.83 | 12000 | 0.2156 | 0.3321 |
- | 0.8298 | 66.49 | 12500 | 0.2142 | 0.3400 |
- | 0.808 | 69.15 | 13000 | 0.2079 | 0.3148 |
- | 0.7999 | 71.81 | 13500 | 0.2117 | 0.3225 |
- | 0.7871 | 74.47 | 14000 | 0.2088 | 0.3174 |
- | 0.7858 | 77.13 | 14500 | 0.2060 | 0.3008 |
- | 0.7764 | 79.78 | 15000 | 0.2128 | 0.3146 |
- | 0.7684 | 82.45 | 15500 | 0.2086 | 0.3101 |
- | 0.7717 | 85.11 | 16000 | 0.2048 | 0.3069 |
- | 0.7435 | 87.76 | 16500 | 0.2027 | 0.3055 |
- | 0.7378 | 90.42 | 17000 | 0.2059 | 0.2993 |
- | 0.7406 | 93.08 | 17500 | 0.2040 | 0.2966 |
- | 0.7361 | 95.74 | 18000 | 0.2056 | 0.3000 |
- | 0.7379 | 98.4 | 18500 | 0.2031 | 0.2976 |
+ | 3.2914 | 4.85 | 500 | 3.2283 | 1.0 |
+ | 3.0068 | 9.71 | 1000 | 2.7939 | 0.9980 |
+ | 1.4306 | 14.56 | 1500 | 0.4857 | 0.6314 |
+ | 1.2831 | 19.42 | 2000 | 0.3679 | 0.6066 |
+ | 1.2065 | 24.27 | 2500 | 0.3303 | 0.5560 |
+ | 1.1449 | 29.13 | 3000 | 0.3008 | 0.4690 |
+ | 1.0926 | 33.98 | 3500 | 0.2817 | 0.4619 |
+ | 1.0635 | 38.83 | 4000 | 0.2665 | 0.4391 |
+ | 1.029 | 43.69 | 4500 | 0.2616 | 0.4175 |
+ | 1.0064 | 48.54 | 5000 | 0.2468 | 0.4051 |
+ | 0.9659 | 53.4 | 5500 | 0.2394 | 0.3860 |
+ | 0.9254 | 58.25 | 6000 | 0.2373 | 0.3689 |
+ | 0.9209 | 63.11 | 6500 | 0.2347 | 0.3670 |
+ | 0.889 | 67.96 | 7000 | 0.2291 | 0.3687 |
+ | 0.8859 | 72.82 | 7500 | 0.2272 | 0.3616 |
+ | 0.8441 | 77.67 | 8000 | 0.2232 | 0.3538 |
+ | 0.8284 | 82.52 | 8500 | 0.2224 | 0.3382 |
+ | 0.8142 | 87.38 | 9000 | 0.2193 | 0.3310 |
+ | 0.8012 | 92.23 | 9500 | 0.2168 | 0.3276 |
+ | 0.7781 | 97.09 | 10000 | 0.2163 | 0.3241 |


  ### Framework versions

- - Transformers 4.16.0.dev0
- - Pytorch 1.10.1+cu102
- - Datasets 1.18.2.dev0
+ - Transformers 4.17.0.dev0
+ - Pytorch 1.10.2+cu102
+ - Datasets 1.18.3
  - Tokenizers 0.11.0