theainerd committed cba2ef9 (1 parent: 42ac8c5): Create README.md

Files changed (1): README.md (+127 lines, new file)
---
language: hi
datasets:
- "Interspeech 2021: [Multilingual and code-switching ASR challenges for low resource Indian languages](https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html)"
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Hindi XLSR Wav2Vec2 Large 53
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Interspeech 2021
      type: interspeech
      args: hi
    metrics:
    - name: Test WER
      type: wer
      value: 72.62
---

# Wav2Vec2-Large-XLSR-53-Hindi

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hindi using data from the [Multilingual and code-switching ASR challenges for low resource Indian languages](https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html) (Interspeech 2021).
When using this model, make sure that your speech input is sampled at 16 kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Load a small slice of the Common Voice Hindi test split.
test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")

# Common Voice audio is 48 kHz; the model expects 16 kHz.
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the dataset: read the audio files as arrays and resample them.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
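
To transcribe a single local recording instead of a Common Voice sample, the same steps apply: load the audio, resample it to 16 kHz, and decode greedily. A minimal sketch; the file name below is a placeholder, not part of the original card:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")

# Load the recording at its native sampling rate ("my_hindi_clip.wav" is a placeholder path).
speech_array, sampling_rate = torchaudio.load("my_hindi_clip.wav")

# Resample to the 16 kHz rate the model expects, if necessary.
if sampling_rate != 16_000:
    speech_array = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array)
speech = speech_array.squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

transcription = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
print(transcription)
```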

## Evaluation

The model can be evaluated as follows on the Hindi test data of Common Voice.

```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model.to("cuda")

# Punctuation stripped from the transcripts before scoring.
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the dataset: normalize the transcripts and read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched greedy decoding over the test set.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
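
Note that `load_metric` is only available in older versions of the `datasets` library. On recent versions, the same WER metric can be loaded from the separate `evaluate` package instead (a drop-in alternative, not what the original script used):

```python
import evaluate

wer = evaluate.load("wer")
```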

**Test Result**: 72.62 %
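
For context, the number above is a word error rate: the word-level substitutions, deletions, and insertions needed to turn a prediction into its reference, divided by the number of reference words. A minimal pure-Python sketch of the computation for a single reference/hypothesis pair (illustrative only; the evaluation script uses the `wer` metric shown above):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```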

## Training

The script used for training can be found here: [Hindi ASR Fine Tuning Wav2Vec2](https://colab.research.google.com/drive/1m-F7et3CHT_kpFqg7UffTIwnUV9AKgrg?usp=sharing).
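
The notebook contains the full recipe; at its core it loads the XLSR-53 checkpoint with a freshly initialised CTC head sized to the Hindi character vocabulary and fine-tunes it with the convolutional feature encoder frozen. A minimal sketch of that setup; the dropout, masking, and vocabulary values below are illustrative assumptions, not the settings actually used for this model:

```python
from transformers import Wav2Vec2ForCTC

# Load the multilingual XLSR-53 checkpoint and attach a randomly initialised CTC head.
# All numeric values are assumptions for illustration, not the notebook's exact settings.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    attention_dropout=0.1,
    hidden_dropout=0.1,
    feat_proj_dropout=0.0,
    mask_time_prob=0.05,
    layerdrop=0.1,
    ctc_loss_reduction="mean",
    vocab_size=64,      # size of the Hindi character vocabulary (assumed)
    pad_token_id=63,    # index of the padding token in that vocabulary (assumed)
)

# The convolutional feature encoder is usually kept frozen when fine-tuning XLSR-53.
model.freeze_feature_extractor()
```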