---
license: mit
datasets: Hemanth-thunder/en_ta
language:
- ta
- en
widget:
- text: A room without books is like a body without a soul
- text: hardwork never fail
- text: Be the change that you wish to see in the world.
- text: i love seeing moon
pipeline_tag: text2text-generation
---

# English to Tamil Translation Model

This model translates English sentences into Tamil. It is a fine-tuned version of [Mr-Vicky](https://huggingface.co/Mr-Vicky-01/Fine_tune_english_to_tamil), available on the Hugging Face model hub.

## About the Authors

This model was developed by [suriya7](https://huggingface.co/suriya7) in collaboration with [Mr-Vicky](https://huggingface.co/Mr-Vicky-01).

## Usage

To use this model, you can either load it directly with the Hugging Face `transformers` library or call it through the Hugging Face Inference API.

### Model Information

**Training Details**

- **Task:** fine-tuned for English-to-Tamil translation
- **Training duration:** over 10 hours
- **Final loss:** 0.6

**Model Architecture**

The model is based on the Transformer architecture, specifically optimized for sequence-to-sequence tasks.

### Installation

To use this model, you'll need the `transformers` library installed. You can install it via pip:

```bash
pip install transformers
```

### Via the Transformers Library

You can use this model in your Python code as shown below.

## Inference
**How to use the model in a notebook:**

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned checkpoint from the Hugging Face Hub
checkpoint = "suriya7/English-to-Tamil"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def language_translator(text):
    """Translate an English sentence into Tamil."""
    tokenized = tokenizer([text], return_tensors="pt")
    out = model.generate(**tokenized, max_length=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

text_to_translate = "hardwork never fail"
output = language_translator(text_to_translate)
print(output)
```
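### Via the Inference API

The Usage section also mentions the Hugging Face Inference API. Below is a minimal sketch of calling the hosted endpoint over plain HTTP, assuming the standard `api-inference.huggingface.co` URL scheme; the `query` helper and the `hf_xxx` token in the example are illustrative placeholders, and you must supply your own Hugging Face access token.

```python
import json
import urllib.request

# Hosted Inference API endpoint for this checkpoint (standard URL scheme)
API_URL = "https://api-inference.huggingface.co/models/suriya7/English-to-Tamil"

def query(payload: dict, token: str) -> dict:
    """POST a JSON payload to the Inference API and return the parsed JSON response."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

# Example (requires a valid Hugging Face access token and network access):
# result = query({"inputs": "hardwork never fail"}, token="hf_xxx")
# print(result)
```

This keeps the client dependency-free (standard library only); if you already use the `requests` package, the same call is a one-line `requests.post(API_URL, headers=..., json=payload)`.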