Update README.md
README.md (CHANGED)

````diff
@@ -22,7 +22,7 @@ The training recipe was based on wsj recipe in [espnet](https://github.com/espne
 
 <!-- Provide a longer summary of what this model is. -->
 
-This model is Hybrid CTC/Attention model with pre-trained HuBERT as the encoder.
+This model is a Hybrid CTC/Attention model with pre-trained HuBERT as the encoder.
 
 This model was trained on Thai-central to be used as a supervised pre-trained model in order to be used for finetuning to other Thai dialects. (Experiment 2 in the paper).
 
@@ -41,7 +41,7 @@ from pythainlp import word_tokenize
 tokenized_sentence_list = word_tokenize(<your_sentence>)
 ```
 
-The CER and WER results on test set are:
+The CER and WER results on the test set are:
 
 CER = 2.0
 
````