---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
inference: false
model_creator: astronomer-io
model_name: Meta-Llama-3-8B-Instruct
model_type: llama
pipeline_tag: text-generation
prompt_template: >-
  {% set loop_messages = messages %}{% for message in loop_messages %}{% set
  content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>


  '+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set
  content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if
  add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>


  ' }}{% endif %}
quantized_by: davidxmle
license: other
license_name: llama-3-community-license
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/LICENSE
tags:
- llama
- llama-3
- facebook
- meta
- astronomer
- gptq
- pretrained
- quantized
- finetuned
- autotrain_compatible
- endpoints_compatible
datasets:
- wikitext
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://www.astronomer.io/logo/astronomer-logo-RGB-standard-1200px.png" alt="Astronomer" style="width: 60%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="margin-top: 1.0em; margin-bottom: 1.0em;"></div>

<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">This model is generously created and made open source by <a href="https://astronomer.io">Astronomer</a>.</p></div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">Astronomer is the de facto company for <a href="https://airflow.apache.org/">Apache Airflow</a>, the most trusted open-source framework for data orchestration and MLOps.</p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Llama-3-8B-Instruct-GPTQ-4-Bit
- Original model creator: [Meta Llama](https://huggingface.co/meta-llama)
- Original model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- Built with Meta Llama 3
- Quantized by [Astronomer](https://astronomer.io)

<!-- description start -->
## Description

This repo contains 4-bit GPTQ-quantized model files for [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).

This model can be loaded with less than 6 GB of VRAM (a huge reduction from the original 16.07 GB model) and can be served lightning fast on the cheapest Nvidia GPUs available (Nvidia T4, Nvidia K80, RTX 4070, etc.).

The 4-bit GPTQ quant shows only a small quality degradation relative to the original `bfloat16` model, but can be served on much smaller GPUs with large improvements in latency and throughput.
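
As a rough illustration, the snippet below sketches how the quantized weights might be loaded and queried with the 🤗 Transformers stack (assuming `auto-gptq` and `optimum` are installed); the device placement, generation settings, and token-budget values are assumptions, not part of this repo.

```
# Hedged sketch: loading this GPTQ repo with transformers + auto-gptq/optimum.
# Package versions, device placement, and generation settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place the quantized weights on the available GPU
    torch_dtype="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who created Llama 3?"},
]
# The Llama 3 chat template (see prompt_template in the metadata above) is applied here.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=[128001, 128009],  # stop on both end tokens to avoid runaway generation
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```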

<!-- description end -->

## GPTQ Quantization Method
- This model is quantized using the AutoGPTQ library, following the best practices noted in the [GPTQ paper](https://arxiv.org/abs/2210.17323).
- Quantization is calibrated with random samples from the specified dataset (wikitext for now) to minimize accuracy loss; a hedged quantization sketch follows the table below.

| Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Description |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 9.09 GB | Yes | 4-bit, with Act Order and group size 128g. Smallest model possible, with minimal accuracy loss |
| More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Additional GPTQ 4-bit variants may be uploaded in the future using different parameters (e.g., other group sizes). |
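
For reference, here is a rough sketch of how a quant like the `main` branch could be produced with AutoGPTQ. The calibration-sample preparation, sample count, and output path are illustrative assumptions, not the exact script used for this repo.

```
# Hedged sketch of producing a 4-bit, 128g, act-order GPTQ quant with AutoGPTQ.
# Calibration details below are illustrative assumptions, not the exact recipe used here.
from datasets import load_dataset
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)

quantize_config = BaseQuantizeConfig(
    bits=4,            # 4-bit weights
    group_size=128,    # 128g grouping, as in the main branch
    desc_act=True,     # activation order ("Act Order: Yes")
    damp_percent=0.1,  # Damp % from the table above
)

# Calibration samples drawn from wikitext (random/first subset; the count of 128 is an assumption).
wikitext = load_dataset("wikitext", "wikitext-2-v1", split="test")
encoded = [
    tokenizer(text, truncation=True, max_length=8192, return_tensors="pt")
    for text in wikitext["text"] if text.strip()
][:128]
examples = [
    {"input_ids": e["input_ids"], "attention_mask": e["attention_mask"]}
    for e in encoded
]

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)
model.quantize(examples)
model.save_quantized("Llama-3-8B-Instruct-GPTQ-4-Bit")
```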

## Serving this GPTQ model using vLLM
This model has been tested for serving via vLLM on an Nvidia T4 (16 GB VRAM).

Tested with the following command:
```
python -m vllm.entrypoints.openai.api_server --model astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit --max-model-len 8192 --dtype float16
```
To work around the non-stop token generation bug, make sure to send requests with `"stop_token_ids": [128001, 128009]` to the vLLM endpoint.
Example request body:
```
{
    "model": "Llama-3-8B-Instruct-GPTQ-4-Bit",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who created Llama 3?"}
        ],
    "max_tokens": 2000,
    "stop_token_ids":[128001,128009]
}
```
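
A minimal way to send that request from Python (assuming the vLLM server above is running locally on its default port 8000) might look like the sketch below; the host, port, and model name are assumptions based on the serve command shown earlier.

```
# Hedged sketch: posting the example request to a locally running vLLM OpenAI-compatible server.
# Host/port are vLLM defaults; the model name must match the --model value passed to the server.
import requests

payload = {
    "model": "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who created Llama 3?"},
    ],
    "max_tokens": 2000,
    "stop_token_ids": [128001, 128009],  # work around the non-stop generation bug
}

resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```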

### Contributors
- Quantized by [David Xue, Machine Learning Engineer from Astronomer](https://www.linkedin.com/in/david-xue-uva/)