---
language: fr
license: cc-by-sa-4.0
datasets:
  - wikipedia
  - cc100
widget:
  - text: Je vais à la
  - text: J'aime le café
  - text: nous avons parlé
  - text: Je m'appelle
---

# GPT2 French base model (Uncased)

## Prerequisites

* transformers==4.19.2

## Model architecture

This model uses the GPT2 base settings except for the vocabulary size.
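
As a quick check, the published configuration can be inspected from the Hub; a minimal sketch, assuming the `ClassCat/gpt2-base-french` model id from the Usage section below.

```python
from transformers import AutoConfig

# Load the published configuration from the Hugging Face Hub
config = AutoConfig.from_pretrained('ClassCat/gpt2-base-french')

# GPT2 base defaults are n_embd=768, n_layer=12, n_head=12;
# only vocab_size is expected to differ (see Tokenizer below)
print(config.n_embd, config.n_layer, config.n_head, config.vocab_size)
```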

## Tokenizer

Uses a BPE tokenizer with a vocabulary size of 50,000.
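
A minimal sketch of loading the tokenizer and checking the vocabulary size, again assuming the model id used in the Usage section:

```python
from transformers import AutoTokenizer

# The tokenizer is distributed alongside the model on the Hub
tokenizer = AutoTokenizer.from_pretrained('ClassCat/gpt2-base-french')

print(tokenizer.vocab_size)                              # expected: 50000
print(tokenizer.tokenize("Je vais à la bibliothèque"))   # BPE subword pieces
```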

## Training Data

* wiki40b/fr (French Wikipedia)
* Subset of CC-100/fr: Monolingual Datasets from Web Crawl Data

## Usage

```python
from transformers import pipeline

# Load a text-generation pipeline backed by this model
generator = pipeline('text-generation', model='ClassCat/gpt2-base-french')

# Generate 5 candidate continuations of a French prompt, up to 50 tokens each
generator("Je vais à la", max_length=50, num_return_sequences=5)
```