Update README.md
pipeline_tag: text2text-generation
---

# English to Tamil Translation Model
This model translates English sentences into Tamil using a fine-tuned version of the Mr-Vicky-01/Fine_tune_english_to_tamil model available on the Hugging Face model hub.
## Usage
To use this model, you can either call it directly with the Hugging Face `transformers` library or query it through the Hugging Face Inference API.
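As a hedged sketch of the second option: the Inference API is typically queried by POSTing JSON to `api-inference.huggingface.co/models/<model-id>`. The exact response shape for this model is an assumption; `HF_TOKEN` stands in for your own access token.

```python
# Hypothetical sketch of querying this model via the Hugging Face Inference API.
# The URL pattern and {"inputs": ...} payload follow the general Inference API
# convention; the token and response handling are placeholders.
import json
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/Mr-Vicky-01/Fine_tune_english_to_tamil"

def build_request(text, token):
    """Build the POST request for one translation without sending it."""
    payload = json.dumps({"inputs": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    )

# To actually translate (requires network access and a valid token):
# with urllib.request.urlopen(build_request("i love coding", HF_TOKEN)) as resp:
#     print(json.load(resp))
```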
### Model Information
**Training Details**

- Fine-tuned for English to Tamil translation
- Training duration: over 10 hours
- Loss achieved: 0.7

**Model Architecture**

The model architecture is based on the Transformer architecture, specifically optimized for sequence-to-sequence tasks.
### Installation

To use this model, you'll need the `transformers` library installed. You can install it via pip:
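The standard install command (the README stops short of showing it) is:

```shell
pip install transformers
```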
### Via Transformers Library
You can use this model in your Python code as shown in the Inference section below.
## Inference

1. **How to use the model in our notebook**:

```python
# The tokenizer and model are loaded earlier in the notebook from this
# checkpoint (e.g. with AutoTokenizer / AutoModelForSeq2SeqLM.from_pretrained
# on "Mr-Vicky-01/Fine_tune_english_to_tamil").
def language_translator(text):
    tokenized = tokenizer([text], return_tensors="pt")  # reconstructed line
    out = model.generate(**tokenized, max_length=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

text_to_translate = "i love coding"
output = language_translator(text_to_translate)
print(output)
```