DeathReaper0965 committed • bf4651c • Parent(s): 15155c0

Update README.md
# T5 Context Corrector (base-sized)

The t5-context-corrector model is a [T5 model](https://huggingface.co/t5-base) fine-tuned on the [Synthetic GEC](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction) dataset and filtered CommonCrawl data in English.

The base model (T5) is pre-trained on the C4 (Colossal Clean Crawled Corpus) dataset and works with numerous downstream tasks.

Our model is fine-tuned specifically on a single downstream task, context correction, using the two datasets mentioned above.
## Model description
This model has the same architecture as its base model: 220 million parameters, 12 encoder blocks and 12 decoder blocks, with an input embedding (vocabulary) size of 32128. Please refer to the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) for more details about the model.
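The architecture above can be expressed as a Transformers configuration. This is a minimal sketch: the hidden-size values passed below (`d_model`, `d_ff`, `num_heads`) are the published t5-base settings and are assumptions not stated in this card.

```python
# Sketch of the architecture described above; d_model, d_ff and num_heads
# are assumed from the standard t5-base configuration.
from transformers import T5Config

config = T5Config(
    vocab_size=32128,        # input embedding size stated above
    num_layers=12,           # 12 encoder blocks
    num_decoder_layers=12,   # 12 decoder blocks
    d_model=768,             # hidden size (assumed t5-base value)
    d_ff=3072,               # feed-forward size (assumed t5-base value)
    num_heads=12,            # attention heads (assumed t5-base value)
)

print(config.num_layers, config.num_decoder_layers, config.vocab_size)
```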
## Intended Use & Limitations
The model is intended to correct the context of a given sentence: pass in a contextually incorrect sentence and get the corrected sentence back.
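A minimal inference sketch using the Transformers sequence-to-sequence API. The checkpoint id below is an assumption, not confirmed by this card; replace it with the model's actual repository id on the Hugging Face Hub.

```python
# Hedged usage sketch: CHECKPOINT is an assumed placeholder id, and the
# generation settings (beam search, max_length) are illustrative choices.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

CHECKPOINT = "t5-context-corrector"  # assumed model id, replace as needed


def correct_context(sentence: str, max_length: int = 128) -> str:
    """Return the contextually corrected version of `sentence`."""
    tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
    model = AutoModelForSeq2SeqLM.from_pretrained(CHECKPOINT)
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_length=max_length, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(correct_context("he go to the market yesterday"))
</imports>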