yinsong1986 committed on
Commit 9416971
1 Parent(s): 8c3ba9e

Update README.md

Files changed (1):
  1. README.md +271 -0
README.md CHANGED

---
license: apache-2.0
inference: false
---

# MistralLite Model

MistralLite is a fine-tuned [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) language model with enhanced capabilities for processing long context (up to 36K tokens). By utilizing an adapted Rotary Embedding and a sliding window during fine-tuning, MistralLite is able to **perform significantly better on several long-context retrieval and answering tasks**, while keeping the simple structure of the original model. MistralLite is useful for applications such as long-context line and topic retrieval, summarization, and question answering. MistralLite can be deployed on a single AWS `g5.2x` instance with a SageMaker [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) endpoint, making it suitable for applications that require high performance in resource-constrained environments. You can also serve the MistralLite model directly using TGI docker containers. MistralLite also supports other serving options such as [vLLM](https://github.com/vllm-project/vllm), and you can use it in Python via the [HuggingFace transformers](https://huggingface.co/docs/transformers/index) and [FlashAttention-2](https://github.com/Dao-AILab/flash-attention) libraries.

MistralLite evolves from [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), and their similarities and differences are summarized below:

|Model|Fine-tuned on long contexts| Quantization | Max context length| RotaryEmbedding adaptation| Sliding Window Size|
|----------|-------------:|-------------:|------------:|-----------:|-----------:|
| Mistral-7B-v0.1 | No | No | 36K | rope_theta = 10000 | 4096 |
| MistralLite | Yes | No | 36K | **rope_theta = 1000000** | **16384** |
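
The adapted values above are visible in the released model configuration. As a quick sanity check, the sketch below loads only the configuration with `transformers.AutoConfig` and prints the relevant fields; it assumes the checkpoint exposes them under the standard Mistral config attribute names (`rope_theta`, `sliding_window`, `max_position_embeddings`), so treat it as illustrative rather than part of the official model card.

```python
from transformers import AutoConfig

# Load only the configuration (no weights) to inspect the long-context settings.
config = AutoConfig.from_pretrained("amazon/MistralLite")

# Standard Mistral config attribute names; values print as None if a field
# is absent in this checkpoint's config.
print("rope_theta:", getattr(config, "rope_theta", None))
print("sliding_window:", getattr(config, "sliding_window", None))
print("max_position_embeddings:", getattr(config, "max_position_embeddings", None))
```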

## Motivation of Developing MistralLite

Since the release of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), the model has become increasingly popular because of its strong performance on a wide range of benchmarks. But most of those benchmarks are evaluated on `short context`, and not much has been investigated about its performance on long context tasks. We therefore evaluated `Mistral-7B-Instruct-v0.1` against benchmarks that are specifically designed to assess the capabilities of LLMs in handling longer context. Although its performance on contexts shorter than 4096 tokens was fairly competitive, there were clear gaps in its performance on longer contexts. Motivated by improving its performance on longer context, we fine-tuned the Mistral 7B model and obtained `MistralLite`. The model managed to `significantly boost the performance of long context handling` over Mistral-7B-Instruct-v0.1. The detailed `long context evaluation results` are as below:

### [Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/) ###

|Model Name| Input length 2851| Input length 5568| Input length 8313| Input length 11044| Input length 13780|
|----------|-------------:|-------------:|------------:|-----------:|-----------:|
| Mistral-7B-Instruct-v0.1 | 90% | 0% | 0% | 0% | 0% |
| MistralLite | **100%** | **100%** | **100%** | **100%** | **98%** |

### [Line Retrieval](https://lmsys.org/blog/2023-06-29-longchat/#longeval-results) ###

|Model Name| Input length 3818| Input length 5661| Input length 7505| Input length 9354| Input length 11188| Input length 12657|
|----------|-------------:|-------------:|------------:|-----------:|-----------:|-----------:|
| Mistral-7B-Instruct-v0.1 | **98%** | 62% | 42% | 42% | 32% | 30% |
| MistralLite | **98%** | **92%** | **88%** | **76%** | **70%** | **60%** |

### [Pass key Retrieval](https://github.com/epfml/landmark-attention/blob/main/llama/run_test.py#L101) ###

|Model Name| Input length 3264| Input length 5396| Input length 8329| Input length 10197|
|----------|-------------:|-------------:|------------:|-----------:|
| Mistral-7B-Instruct-v0.1 | **100%** | 50% | 20% | 30% |
| MistralLite | **100%** | **100%** | **100%** | **100%** |

### [Question Answering with Long Input Texts](https://nyu-mll.github.io/quality/) ###

|Model Name| Test set Accuracy | Hard subset Accuracy|
|----------|-------------:|-------------:|
| Mistral-7B-Instruct-v0.1 | 44.3% | 39.7% |
| MistralLite | **64.4%** | **56.2%** |

## Model Details

- **Developed by:** [AWS Contributors](https://github.com/orgs/aws-samples/teams/aws-prototype-ml-apac)
- **Model type:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Language:** English
- **Finetuned from weights:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Finetuned on data:**
    - [Sliding-Encoder and Decoder (SLED)](https://huggingface.co/datasets/tau/sled)
    - [(Long) Natural Questions (NQ)](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections#multi-passage-qa-from-natural-questions)
    - [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- **Supported Serving Frameworks:**
    - [Text-Generation-Inference 1.1.0](https://github.com/huggingface/text-generation-inference/tree/v1.1.0)
    - [vLLM](https://github.com/vllm-project/vllm)
    - [HuggingFace transformers](https://huggingface.co/docs/transformers/index)
    - [HuggingFace Text Generation Inference (TGI) container on SageMaker](https://github.com/awslabs/llm-hosting-container)
- **Model License:** Apache 2.0
- **Contact:** [GitHub issues](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/issues)

## How to Use MistralLite from Python Code ##

### Install the necessary packages

Requires: [transformers](https://pypi.org/project/transformers/) 4.34.0 or later, and [flash-attn](https://pypi.org/project/flash-attn/) 2.3.1.post1 or later.

```shell
pip install transformers==4.34.0
pip install flash-attn==2.3.1.post1 --no-build-isolation
```

### You can then try the following example code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers
import torch

model_id = "amazon/MistralLite"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             torch_dtype=torch.bfloat16,
                                             use_flash_attention_2=True,
                                             device_map="auto",)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)
prompt = "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>"

sequences = pipeline(
    prompt,
    max_new_tokens=200,
    do_sample=False,
    return_full_text=False,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"{seq['generated_text']}")
```

**Important** - Use the prompt template below for MistralLite:

```
<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>
```
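
If you are assembling prompts programmatically, a small helper keeps the template consistent. The function below is an illustrative sketch (not part of the model's official API); it simply wraps a question in the `<|prompter|>` / `<|assistant|>` markers shown above.

```python
def format_mistrallite_prompt(question: str) -> str:
    """Wrap a user question in the MistralLite prompt template shown above."""
    return f"<|prompter|>{question}</s><|assistant|>"

# Example usage with the pipeline defined in the previous snippet:
# prompt = format_mistrallite_prompt("What are the main challenges to support a long context for LLM?")
# sequences = pipeline(prompt, max_new_tokens=200, do_sample=False, return_full_text=False)
```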

## How to Deploy MistralLite on Amazon SageMaker ##

### Install the necessary packages

Requires: [sagemaker](https://pypi.org/project/sagemaker/) 2.192.1 or later.

```shell
pip install sagemaker==2.192.1
```

### Deploy the Model as a SageMaker Endpoint ###

To deploy MistralLite on a SageMaker endpoint, use the example code below.

```python
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri
import time

sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
role = sagemaker.get_execution_role()

# Retrieve the Hugging Face LLM (TGI) container image for this region.
image_uri = get_huggingface_llm_image_uri(
    backend="huggingface",  # or lmi
    region=region,
    version="1.1.0"
)

model_name = "MistralLite-" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())

hub = {
    'HF_MODEL_ID': 'amazon/MistralLite',
    'HF_TASK': 'text-generation',
    'SM_NUM_GPUS': '1',
    'HF_MODEL_QUANTIZE': 'true'
}

model = HuggingFaceModel(
    name=model_name,
    env=hub,
    role=role,
    image_uri=image_uri
)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    endpoint_name=model_name
)
```

### Perform Inference ###

To call the endpoint, use the example code below:

```python
input_data = {
    "inputs": "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>",
    "parameters": {
        "do_sample": False,
        "max_new_tokens": 100,
    }
}
predictor.predict(input_data)
```

or via [boto3](https://pypi.org/project/boto3/), as in the example code below:

```python
import boto3
import json

def call_endpoint(client, prompt, endpoint_name, parameters):
    # Send the prompt and generation parameters to the SageMaker endpoint
    # and return the generated text.
    payload = {"inputs": prompt,
               "parameters": parameters}
    response = client.invoke_endpoint(EndpointName=endpoint_name,
                                      Body=json.dumps(payload),
                                      ContentType="application/json")
    output = json.loads(response["Body"].read().decode())
    result = output[0]["generated_text"]
    return result

client = boto3.client("sagemaker-runtime")
parameters = {
    "max_new_tokens": 250,
    "do_sample": True,
    "temperature": None,
    "use_cache": True,
    "seed": 1,
}
endpoint_name = "your-endpoint-name-here"
prompt = "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>"
result = call_endpoint(client, prompt, endpoint_name, parameters)
print(result)
```
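
When you are done experimenting, remember to remove the endpoint so you are not billed for idle capacity. A minimal cleanup sketch using the `predictor` object created in the deployment example above:

```python
# Delete the SageMaker model and endpoint created by model.deploy() above.
predictor.delete_model()
predictor.delete_endpoint()
```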

## How to Serve MistralLite on TGI ##

### Start TGI server ###

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters (passed to the TGI launcher when running the container):

```shell
--model-id amazon/MistralLite --port 3000 --max-input-length 8192 --max-total-tokens 16384 --max-batch-prefill-tokens 16384
```

### Perform Inference ###

Example Python code for inference with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub==0.17.0
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>"

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
                                  max_new_tokens=100,
                                  do_sample=False,
                                  temperature=None,
                                  )

print(f"Model output: {response}")
```

**Important** - When using MistralLite for inference for the first time, it may require a brief 'warm-up' period that can take tens of seconds. However, subsequent inferences should be faster and return results in a more timely manner. This warm-up period is normal and should not affect the overall performance of the system once the initialization period has been completed.
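
If this first-call latency matters for your application, one simple mitigation (an illustrative sketch reusing the `client` from the example above, not an official recommendation) is to issue a short throwaway request right after the server starts:

```python
# Optional warm-up: send a tiny request once so the first real request
# does not pay the warm-up cost described above.
_ = client.text_generation("<|prompter|>Hello</s><|assistant|>", max_new_tokens=1)
```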

## How to Serve MistralLite on vLLM ##

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

### Using vLLM as a server ###

When using vLLM as a server, pass the `--model amazon/MistralLite` parameter, for example:

```shell
python3 -m vllm.entrypoints.api_server --model amazon/MistralLite
```
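
You can then send requests to the server over HTTP. The snippet below is an illustrative sketch that assumes the demo API server's default `/generate` route on port 8000; check the vLLM documentation for your version, as this simple API server is intended for demonstration rather than production use.

```python
import json
import requests

# Prompt in the MistralLite template, plus sampling parameters understood by vLLM.
payload = {
    "prompt": "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>",
    "max_tokens": 100,
    "temperature": 0,
}
# Default host/port of the demo API server started above (adjust if needed).
response = requests.post("http://localhost:8000/generate", json=payload)
print(json.dumps(response.json(), indent=2))
```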

### Using vLLM in Python Code ###

When using vLLM from Python code, please see the example code below:

```python
from vllm import LLM, SamplingParams

prompts = [
    "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>",
]
sampling_params = SamplingParams(temperature=0, max_tokens=100)

llm = LLM(model="amazon/MistralLite",)

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```

## Limitations ##

Before using the MistralLite model, it is important to perform your own independent assessment and to take measures to ensure that your use complies with your own specific quality control practices and standards, as well as with the local rules, laws, regulations, licenses and terms that apply to you and your content.