---
license: mit
datasets: Hemanth-thunder/en_ta
language:
- ta
- en
widget:
- text: A room without books is like a body without a soul
- text: hardwork never fail
- text: Be the change that you wish to see in the world.
- text: i love seeing moon
pipeline_tag: text2text-generation
---

# English to Tamil Translation Model

This model translates English sentences into Tamil. It is a fine-tuned version of the [Mr-Vicky](https://huggingface.co/Mr-Vicky-01/Fine_tune_english_to_tamil) model and is available on the Hugging Face model hub.

## About the Authors
This model was developed by [suriya7](https://huggingface.co/suriya7) in collaboration with [Mr-Vicky](https://huggingface.co/Mr-Vicky-01). 

## Usage

To use this model, you can either load it directly with the Hugging Face `transformers` library or call it through the Hugging Face Inference API.
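As a rough sketch of the Inference API route: the endpoint URL below follows the standard `api-inference.huggingface.co/models/<repo>` pattern, and the token value is a placeholder you would replace with your own Hugging Face access token. This only builds the request; sending it requires the `requests` library and network access.

```python
import json

# Standard Hugging Face Inference API endpoint pattern
# (assumes hosted inference is enabled for this repository).
API_URL = "https://api-inference.huggingface.co/models/suriya7/English-to-Tamil"

def build_request(text, token):
    """Build the headers and JSON body for a translation request."""
    headers = {"Authorization": f"Bearer {token}"}  # token is a placeholder
    body = json.dumps({"inputs": text})
    return headers, body

headers, body = build_request("hardwork never fail", "hf_your_token_here")
# To send it: requests.post(API_URL, headers=headers, data=body)
```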


### Model Information

**Training details**

- Fine-tuned for English-to-Tamil translation
- Training duration: over 10 hours
- Final training loss: 0.6

**Architecture**

- Based on the Transformer architecture, optimized for sequence-to-sequence tasks

### Installation
To use this model, you'll need to have the `transformers` library installed. You can install it via pip:
```bash
pip install transformers
```
### Via the Transformers Library

You can load the model and run inference directly in Python:
```python
# Load the tokenizer and model from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "suriya7/English-to-Tamil"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def language_translator(text):
    """Translate an English sentence into Tamil."""
    tokenized = tokenizer([text], return_tensors='pt')
    out = model.generate(**tokenized, max_length=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

text_to_translate = "hardwork never fail"
output = language_translator(text_to_translate)
print(output)
```
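Because generation is capped at `max_length=128` tokens, long passages may be truncated. One workaround is to split the input into sentence-sized chunks and translate each chunk separately. The helper below is a hypothetical sketch (the regex-based sentence splitter and the `max_chars` threshold are assumptions, not part of the model):

```python
import re

def chunk_sentences(text, max_chars=200):
    """Split text into sentence chunks of at most max_chars characters,
    so each chunk fits comfortably within the model's generation limit.
    NOTE: illustrative helper, not part of the published model."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if adding this sentence would exceed the limit
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be passed through `language_translator` and the results joined back together.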