---
license: other
license_name: llama-3-community-license
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/LICENSE
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://www.astronomer.io/logo/astronomer-logo-RGB-standard-1200px.png" alt="Astronomer" style="width: 60%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="margin-top: 1.0em; margin-bottom: 1.0em;"></div>

<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">This model is generously created and made open source by <a href="https://astronomer.io">Astronomer</a>.</p></div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">Astronomer is the company behind <a href="https://airflow.apache.org/">Apache Airflow</a>, the most trusted open-source framework for data orchestration and MLOps.</p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Llama-3-8B-Instruct-GPTQ-4-Bit
- Original model creator: [Meta Llama](https://huggingface.co/meta-llama)
- Original model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- Built with Meta Llama 3
- Quantized by [Astronomer](https://astronomer.io)

<!-- description start -->
## Description

This repo contains 4-bit GPTQ quantized model files for [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).

This model can be loaded with less than 6 GB of VRAM (a huge reduction from the original 16.07 GB model) and can be served lightning fast on the cheapest Nvidia GPUs available (Nvidia T4, Nvidia K80, RTX 4070, etc.).

The 4-bit GPTQ quant shows only a small quality degradation compared to the original `bfloat16` model, but it can be served on much smaller GPUs with significantly better latency and throughput.
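
For a quick local test outside of a serving framework, the quantized weights can be loaded directly with `transformers`. The snippet below is a minimal sketch (not an official script from this repo), assuming `transformers`, `optimum`, and `auto-gptq` are installed on a CUDA machine:

```
# Minimal sketch: load the 4-bit GPTQ weights with transformers
# (assumes transformers, optimum, and auto-gptq are installed on a CUDA machine).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # places the quantized weights on the GPU
    torch_dtype=torch.float16,
)

messages = [{"role": "user", "content": "Who created Llama 3?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Pass both Llama 3 end-of-turn token ids (see the vLLM notes below) as stop tokens.
output = model.generate(input_ids, max_new_tokens=256, eos_token_id=[128001, 128009])
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```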

<!-- description end -->

## GPTQ Quantization Method
- This model was quantized using the AutoGPTQ library, following the best practices noted in the [GPTQ paper](https://arxiv.org/abs/2210.17323); a simplified sketch of the recipe is shown after the table below.
- Quantization is calibrated with random samples from the specified dataset (wikitext for now) to minimize accuracy loss.

| Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Description |
| ------ | ---- | ---------- | --------- | ------ | ------------ | --------------- | --------- | ------- | ----------- |
| [main](https://huggingface.co/astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 9.09 GB | Yes | 4-bit, with Act Order and group size 128g. The smallest possible model with only a tiny accuracy loss. |
| More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Additional GPTQ 4-bit variants using different parameters (e.g., other group sizes) may be uploaded in the future. |
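
The snippet below is a simplified sketch of that recipe (not the exact script used to produce this repo), assuming `auto-gptq`, `transformers`, and `datasets` are installed; the quantization parameters mirror the `main` branch row above:

```
# Simplified sketch of the GPTQ quantization recipe (not the exact script used here).
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer
from datasets import load_dataset

base_model = "meta-llama/Meta-Llama-3-8B-Instruct"

quantize_config = BaseQuantizeConfig(
    bits=4,            # 4-bit weights
    group_size=128,    # 128g grouping
    desc_act=True,     # "Act Order: Yes"
    damp_percent=0.1,  # "Damp %" from the table
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)

# Calibration: random samples from wikitext, truncated to the model's sequence length.
wikitext = load_dataset("wikitext", "wikitext-2-v1", split="test")
texts = [t for t in wikitext["text"] if t.strip()][:128]
examples = [tokenizer(t, truncation=True, max_length=8192) for t in texts]

model.quantize(examples)
model.save_quantized("Llama-3-8B-Instruct-GPTQ-4-Bit", use_safetensors=True)
tokenizer.save_pretrained("Llama-3-8B-Instruct-GPTQ-4-Bit")
```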

## Serving this GPTQ model using vLLM
Tested serving this model via vLLM using an Nvidia T4 (16 GB VRAM).

Tested with the command below:
```
python -m vllm.entrypoints.openai.api_server --model astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit --max-model-len 8192 --dtype float16
```
To avoid the non-stop token generation bug, make sure to send requests with `"stop_token_ids": [128001, 128009]` to the vLLM endpoint.
Example:
```
{
  "model": "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who created Llama 3?"}
  ],
  "max_tokens": 2000,
  "stop_token_ids": [128001, 128009]
}
```
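
The same request can also be sent from Python. This is a minimal sketch using the `requests` library, assuming the server started above is running locally on the default port 8000 and serves the model under its full repo name:

```
import requests

# Assumes the vLLM OpenAI-compatible server above is running on localhost:8000.
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who created Llama 3?"},
        ],
        "max_tokens": 2000,
        # Work around the non-stop generation bug by passing both stop token ids.
        "stop_token_ids": [128001, 128009],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```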

### Contributors
- Quantized by [David Xue, Machine Learning Engineer at Astronomer](https://www.linkedin.com/in/david-xue-uva/)