---
license: apache-2.0
tags:
- generated_from_trainer
- instruction fine-tuning
model-index:
- name: flan-t5-small-distil-v2
  results: []
language:
- en
pipeline_tag: text2text-generation
---

# LaMini-FLAN-T5-Small

This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the [LaMini dataset](), which contains 2.58M samples for instruction fine-tuning. For more information about the dataset, please refer to our [project repository]().

## Training Procedure
We initialize with [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) and fine-tune it on our [LaMini dataset](). The resulting model has 77M parameters in total (LaMini-Flan-T5-77M in the model table below).
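As a quick sanity check, the base checkpoint can be loaded with the standard Hugging Face API and its parameter count inspected (a minimal sketch, not our training script):

```python
# Load the base checkpoint that this model is initialized from
# and count its parameters.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_checkpoint = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(base_checkpoint)

num_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {num_params / 1e6:.0f}M")
```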

### Training Hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
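
Expressed as Hugging Face `Seq2SeqTrainingArguments`, these settings would look roughly as follows; `output_dir` is a hypothetical placeholder, and this is a sketch rather than our exact training script:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-small-distil-v2",  # hypothetical output path
    learning_rate=5e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch: 128 * 4 = 512
    adam_beta1=0.9,                 # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```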

## Evaluation
We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more details, please refer to our [paper]().

## More Models
You can download the LaMini model series as follows. Note that not all models perform equally well; more details can be found in our [paper]().
<details>
<summary> Click to expand </summary>
<table>
    <caption>
    LaMini Language Models collection.
  </caption>
  <thead>
    <tr>
      <th>Name</th>
      <th>Architecture</th>
      <th>Initialization</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>LaMini-T5-61M</td>
      <td>encoder-decoder</td>
      <td>T5-small</td>
    </tr>
    <tr>
      <td>LaMini-T5-223M</td>
      <td>encoder-decoder</td>
      <td>T5-base</td>
    </tr>
    <tr>
      <td>LaMini-T5-738M</td>
      <td>encoder-decoder</td>
      <td>T5-large</td>
    </tr>
    <tr>
      <td>LaMini-Flan-T5-77M</td>
      <td>encoder-decoder</td>
      <td>Flan-T5-small</td>
    </tr>
    <tr>
      <td>LaMini-Flan-T5-248M</td>
      <td>encoder-decoder</td>
      <td>Flan-T5-base</td>
    </tr>
    <tr>
      <td>LaMini-Flan-T5-783M</td>
      <td>encoder-decoder</td>
      <td>Flan-T5-large</td>
    </tr>
    <tr>
      <td>LaMini-Cb-111M</td>
      <td>decoder-only</td>
      <td>Cerebras-GPT-111M</td>
    </tr>
    <tr>
      <td>LaMini-Cb-256M</td>
      <td>decoder-only</td>
      <td>Cerebras-GPT-256M</td>
    </tr>
    <tr>
      <td>LaMini-Cb-590M</td>
      <td>decoder-only</td>
      <td>Cerebras-GPT-590M</td>
    </tr>
    <tr>
      <td>LaMini-Cb-1.3B</td>
      <td>decoder-only</td>
      <td>Cerebras-GPT-1.3B</td>
    </tr>
    <tr>
      <td>LaMini-GPT-124M</td>
      <td>decoder-only</td>
      <td>GPT-2</td>
    </tr>
    <tr>
      <td>LaMini-GPT-774M</td>
      <td>decoder-only</td>
      <td>GPT-2 large</td>
    </tr>
    <tr>
      <td>LaMini-GPT-1.5B</td>
      <td>decoder-only</td>
      <td>GPT-2 xl</td>
    </tr>
  </tbody>
</table>

</details>


## Use

### Intended use

We recommend using the model to respond to human instructions written in natural language. Below, we show how to load and run the model with the Hugging Face `pipeline()` API.
### CPU

<details>
<summary> Click to expand </summary>

```python
# pip install -q transformers
from transformers import pipeline

checkpoint = "{model_name}"  # replace with this model's checkpoint id

generator = pipeline('text2text-generation', model=checkpoint, use_auth_token=True)

input_prompt = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"'
generated_text = generator(input_prompt, max_length=512, do_sample=True, repetition_penalty=1.5)[0]['generated_text']

print("Response:", generated_text)
```

</details>

### GPU

<details>
<summary> Click to expand </summary>

```python
# pip install -q transformers
from transformers import pipeline

checkpoint = "{model_name}"  # replace with this model's checkpoint id

# device=0 places the pipeline on the first CUDA GPU.
generator = pipeline('text2text-generation', model=checkpoint, use_auth_token=True, device=0)

input_prompt = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"'
generated_text = generator(input_prompt, max_length=512, do_sample=True, repetition_penalty=1.5)[0]['generated_text']

print("Response:", generated_text)
```

</details>
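
Both snippets decode with sampling (`do_sample=True`) and a `repetition_penalty` of 1.5, so outputs vary between runs; for more deterministic responses you can drop these arguments or pass `num_beams` to use beam search.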

## Limitations

More information needed


## Citation
```bibtex
@misc{,
      title={LaMini: Distilling Knowledge from Large Language Models}, 
      author={},
      year={2023},
      eprint={},
      archivePrefix={},
      primaryClass={}
}
```