
Fine-tuned mT5-base model on the Sinhala Wikipedia dataset

This model is fine-tuned on articles from Sinhala Wikipedia for article generation. Around 10,000 articles were used for training, over more than 100 fine-tuning runs.

How to use

Every prompt must begin with the prefix "writeWiki: ".
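As a minimal sketch, a small helper (hypothetical, not part of the model card) can prepend the required prefix to a topic string:

```python
def make_prompt(title):
    # The model was fine-tuned with this prefix,
    # so every prompt needs to start with it.
    return "writeWiki: " + title

print(make_prompt("මානව ආහාර"))  # → "writeWiki: මානව ආහාර"
```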

You can use this model with a pipeline for text generation.

First, you may need to install the required libraries.

!pip uninstall transformers -y
!pip install transformers
!pip install tokenizers sentencepiece

Then you may need to restart the runtime, either manually or by terminating the process with the code below:

import os
os.kill(os.getpid(), 9)

Then import the tokenizer and run the pipeline:

from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained('google/mt5-base')
generator = pipeline('text2text-generation', model='Suchinthana/MT5-Sinhala-Wikigen-Experimental', tokenizer=tokenizer)
generator("writeWiki: මානව ආහාර", do_sample=True, max_length=180)
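The pipeline returns a list of dictionaries. A sketch of extracting the generated string, assuming the standard `text2text-generation` output shape (the Sinhala text below is a placeholder, not real model output):

```python
# Sample output in the shape the pipeline returns
# (the "generated_text" value here is a made-up placeholder).
outputs = [{"generated_text": "මානව ආහාර යනු ..."}]

# Each dict carries the generated string under the "generated_text" key.
texts = [item["generated_text"] for item in outputs]
print(texts[0])
```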
Model size: 300M params (Safetensors, F32 tensors)
