---

language: es
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python es un lenguaje de alto nivel de programación interpretado cuya filosofía hace hincapié en la legibilidad de su código, se utiliza para desarrollar aplicaciones de todo tipo, ejemplos: Instagram, Netflix, Panda 3D, entre otros. Se trata de un lenguaje de programación multiparadigma, ya que soporta parcialmente la orientación a objetos, programación imperativa y, en menor medida, programación funcional. Es un lenguaje interpretado, dinámico y multiplataforma."

license: apache-2.0
---


# doc2query/msmarco-spanish-mt5-base-v1

This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).

It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini. A minimal sketch of this workflow follows the list.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
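
As a rough illustration of the document expansion workflow, the sketch below generates queries for a passage and appends them to the passage text before it is indexed. The `expand_document` helper, the 20-query default, and the simple string concatenation are illustrative assumptions, not part of this repository:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = 'doc2query/msmarco-spanish-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def expand_document(passage, num_queries=20):
    # Hypothetical helper: generate queries via sampling and append them
    # to the passage so a BM25 index also matches the query terms.
    input_ids = tokenizer.encode(passage, max_length=320, truncation=True, return_tensors='pt')
    with torch.no_grad():
        outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            num_return_sequences=num_queries
        )
    queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    # This expanded string is what you would store as the searchable
    # text in Elasticsearch, OpenSearch, or Lucene.
    return passage + ' ' + ' '.join(queries)
```

In the index you would search over `expand_document(passage)` while still returning the original passage to users.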

## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model_name = 'doc2query/msmarco-spanish-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "Python es un lenguaje de alto nivel de programación interpretado cuya filosofía hace hincapié en la legibilidad de su código, se utiliza para desarrollar aplicaciones de todo tipo, ejemplos: Instagram, Netflix, Panda 3D, entre otros. Se trata de un lenguaje de programación multiparadigma, ya que soporta parcialmente la orientación a objetos, programación imperativa y, en menor medida, programación funcional. Es un lenguaje interpretado, dinámico y multiplataforma."


def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Here we use top_k / top_p random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )

        # Here we use beam search. It generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )

    print("Paragraph:")
    print(para)

    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')


create_queries(text)
```

**Note:** `model.generate()` is non-deterministic for top_k/top_p sampling. It produces different queries each time you run it.
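
If you need reproducible sampling outputs, you can fix the random seed before generating; `set_seed` is a standard `transformers` utility, not something specific to this model:

```python
from transformers import set_seed

set_seed(42)          # seeds the Python, NumPy and PyTorch RNGs
create_queries(text)  # repeated runs now return the same sampled queries
```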

## Training
This model was fine-tuned from [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.

The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.

This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
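
For reference, a minimal sketch of applying the same length limits at inference time; the 320/64 values come from the training setup above, while the surrounding code is illustrative:

```python
# Truncate the input to 320 word pieces, matching the training setup,
# and cap the generated query at 64 word pieces.
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(input_ids=input_ids, max_length=64, num_beams=5, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```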