---
language: fa
license: apache-2.0
tags:
- farsi
- persian
---

# GPT2-Persian

Bolbol-zaban/gpt2-persian is a GPT-2 language model trained with hyperparameters similar to the standard gpt2-medium, with three differences:

  1. The context size is reduced from 1024 to 256 subwords to make training affordable.
  2. Instead of BPE, the Google SentencePiece tokenizer is used for tokenization.
  3. The training dataset only includes Persian text. All non-Persian characters are replaced with special tokens (e.g. [LAT], [URL], [NUM]); see the sketch after this list.
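The exact preprocessing rules used to build the training corpus are not published here, but a minimal sketch of this kind of replacement, with illustrative regular expressions that are assumptions rather than the actual rules, might look like this:

```python
import re

# Hypothetical sketch: map non-Persian spans to the special tokens named
# above. The real rules used for the training data are not specified here.
def replace_non_persian(text: str) -> str:
    text = re.sub(r'https?://\S+', '[URL]', text)  # URLs
    text = re.sub(r'[A-Za-z]+', '[LAT]', text)     # Latin-script words
    text = re.sub(r'\d+', '[NUM]', text)           # digit sequences
    return text

# Prompt: "In 2020 the GPT2 model was released" -> digits and Latin
# words come back as [NUM] and [LAT] tokens.
print(replace_non_persian('در سال 2020 مدل GPT2 منتشر شد'))
```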

Please refer to this blog post for further details. Also try the model here or on Bolbolzaban.com.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline, AutoTokenizer, GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained('bolbolzaban/gpt2-persian')
model = GPT2LMHeadModel.from_pretrained('bolbolzaban/gpt2-persian')
generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length': 256})

# Prompt: "In a surprising event, researchers ..."
sample = generator('در یک اتفاق شگفت انگیز، پژوهشگران')
print(sample[0]['generated_text'])
```

If you are using TensorFlow, import TFGPT2LMHeadModel instead of GPT2LMHeadModel, as in the sketch below.
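A minimal sketch of the TensorFlow variant. Whether native TensorFlow weights are published for this checkpoint is an assumption here, so `from_pt=True` is included as a fallback that converts the PyTorch checkpoint:

```python
from transformers import pipeline, AutoTokenizer, TFGPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained('bolbolzaban/gpt2-persian')
# from_pt=True converts the PyTorch weights in case no native TF weights
# exist in the repo (an assumption; drop it if TF weights are available).
model = TFGPT2LMHeadModel.from_pretrained('bolbolzaban/gpt2-persian', from_pt=True)
generator = pipeline('text-generation', model, tokenizer=tokenizer)
sample = generator('در یک اتفاق شگفت انگیز، پژوهشگران')
```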

## Acknowledgment

This project is supported by Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).

## Citation and Reference

Please reference the "bolbolzaban.com" website if you are using gpt2-persian in your research or commercial application.

## Contacts

Bolbolzaban.com, Twitter, Telegram, Instagram, LinkedIn