Model Description

This is the model presented in the paper "Exploring Methods for Cross-lingual Text Style Transfer: The Case of Text Detoxification".

The model is based on mBART-large-50 and fine-tuned on two parallel detoxification corpora: ParaDetox (English) and RuDetox (Russian). See the paper for further details.
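For reference, here is a sketch of loading the training corpora with the datasets library, assuming both are published on the Hugging Face Hub; the Hub dataset IDs below are assumptions, not taken from this card.

```python
# Sketch: load the parallel detoxification corpora from the Hub.
# The dataset IDs are assumptions; check the Hub for the exact names.
from datasets import load_dataset

paradetox = load_dataset("s-nlp/paradetox")    # English toxic/neutral pairs (assumed ID)
rudetox = load_dataset("s-nlp/ru_paradetox")   # Russian toxic/neutral pairs (assumed ID)
print(paradetox["train"][0])
```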

Usage

  1. Model loading.
```python
from transformers import MBartForConditionalGeneration, AutoTokenizer

model = MBartForConditionalGeneration.from_pretrained("s-nlp/mbart-detox-en-ru").cuda()  # drop .cuda() to run on CPU
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50")

# The utility below reads tokenizer.tgt_lang, so set the mBART-50 language codes
# explicitly; detoxification keeps the language, so source and target match
# ("en_XX" for English, "ru_RU" for Russian).
tokenizer.src_lang = "en_XX"
tokenizer.tgt_lang = "en_XX"
```
  2. Detoxification utility.
```python
def paraphrase(text, model, tokenizer, n=None, max_length="auto", beams=3):
    # Accept a single string or a list of strings.
    texts = [text] if isinstance(text, str) else text
    # Tokenize with padding and keep the attention mask so padded
    # positions are ignored during generation.
    inputs = tokenizer(texts, return_tensors="pt", padding=True).to(model.device)
    if max_length == "auto":
        max_length = inputs["input_ids"].shape[1] + 10

    result = model.generate(
        **inputs,
        num_return_sequences=n or 1,
        do_sample=True,
        temperature=1.0,
        repetition_penalty=10.0,
        max_length=max_length,
        min_length=int(0.5 * max_length),
        num_beams=beams,
        # Force the decoder to start with the target-language token.
        forced_bos_token_id=tokenizer.lang_code_to_id[tokenizer.tgt_lang],
    )
    texts = [tokenizer.decode(r, skip_special_tokens=True) for r in result]

    # Return a single string when a single string was passed in.
    if not n and isinstance(text, str):
        return texts[0]
    return texts
```
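
A minimal usage sketch; the input sentences are illustrative, not taken from the paper.

```python
# English input (language codes were already set to "en_XX" above).
print(paraphrase("This is a damn stupid idea!", model, tokenizer))

# Russian input: switch both language codes first.
tokenizer.src_lang = "ru_RU"
tokenizer.tgt_lang = "ru_RU"
print(paraphrase("Это чертовски глупая идея!", model, tokenizer))
```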

Citation

TBD
