davidxmle committed
Commit eb85ccd
1 Parent(s): 2c89d70

Update README.md

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -41,13 +41,14 @@ datasets:
 <p style="margin-top: 0.5em; margin-bottom: 0em;"></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.linkedin.com/in/david-xue-uva/">Quantized by David Xue from Astronomer</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.linkedin.com/in/david-xue-uva/">Quantized by David Xue @ Astronomer</a></p>
 </div>
 </div>
 <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">This model is generously created and made open source by <a href="https://astronomer.io">Astronomer</a></p></div>
-<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">Astronomer is the de factor company for <a href="https://airflow.apache.org/">Apache Airflow</a>, the most trusted open-source framework for data orchestration and MLOps.</p></div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">Astronomer is the de facto company for <a href="https://airflow.apache.org/">Apache Airflow</a>, the most trusted open-source framework for data orchestration and MLOps.</p></div>
 <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
+
 # Important Note Regarding a Known Bug in Llama 3
 - Two files are modified to address a known issue where Llama 3 models keep generating additional tokens non-stop until hitting the max token limit.
 - `generation_config.json`'s `eos_token_id` has been modified to add the other EOS token that Llama-3 uses.
@@ -69,6 +70,9 @@ This repo contains 8 Bit quantized GPTQ model files for [meta-llama/Meta-Llama-3
 <!-- description end -->
 
 ## GPTQ Quantization Method
+- This model is quantized with the AutoGPTQ library, following the best practices noted in the [GPTQ paper](https://arxiv.org/abs/2210.17323).
+- Quantization is calibrated with random samples from the specified dataset (wikitext for now) to minimize accuracy loss.
+
 | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | VRAM Size | ExLlama | Desc |
 | ------ | ---- | -- | --------- | ------ | ------------ | ------- | --------- | ------- | ---- |
 | [main](https://huggingface.co/astronomer-io/Llama-3-8B-Instruct-GPTQ-8-Bit/tree/main) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 9.09 GB | No | 8-bit, with Act Order and group size 32g. Minimal accuracy loss with decent VRAM usage reduction. |
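For context on the first hunk: the EOS fix baked into `generation_config.json` can also be applied at inference time. Below is a minimal sketch using the `transformers` generate API; the model ID matches this repo, but the prompt, `max_new_tokens`, and the explicit stop-token list are illustrative (with the patched config, the second terminator should already be the default).

```python
# Sketch: stopping Llama-3 generation at both of its EOS tokens.
# <|eot_id|> is the turn-delimiter token the note above refers to.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "astronomer-io/Llama-3-8B-Instruct-GPTQ-8-Bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is Apache Airflow?"}],  # illustrative prompt
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# With the patched generation_config.json both IDs are already the default;
# passing them explicitly is a safe workaround for unpatched checkpoints.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

output = model.generate(input_ids, max_new_tokens=256, eos_token_id=terminators)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```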
 
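For readers curious what the calibration described in the second hunk looks like in code, here is a sketch of an AutoGPTQ run reproducing the table's settings (8-bit, group size 32, act-order, damp 0.1, wikitext). The sample count and tokenization details are assumptions; the commit does not include the actual script.

```python
# Sketch of a GPTQ calibration run with AutoGPTQ matching the table's
# settings. The number of calibration samples (128) is an assumption.
import random

from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from datasets import load_dataset
from transformers import AutoTokenizer

base_model = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)

quantize_config = BaseQuantizeConfig(
    bits=8,            # "Bits" column
    group_size=32,     # "GS" column
    desc_act=True,     # "Act Order" column
    damp_percent=0.1,  # "Damp %" column
)

# Random non-empty wikitext samples as calibration data ("GPTQ Dataset"
# column); real runs may pack texts to fill the full 8192-token "Seq Len".
texts = [
    t for t in load_dataset("wikitext", "wikitext-2-v1", split="test")["text"]
    if t.strip()
]
examples = [
    tokenizer(t, truncation=True, max_length=8192, return_tensors="pt")
    for t in random.sample(texts, 128)
]

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)
model.quantize(examples)
model.save_quantized("Llama-3-8B-Instruct-GPTQ-8-Bit", use_safetensors=True)
```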
 
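Finally, a hedged example of consuming the `main` branch listed in the table. This assumes the `optimum` and `auto-gptq` packages are installed so that `transformers` can load GPTQ weights, plus a GPU with roughly the 9.09 GB of free VRAM the table quotes.

```python
# Sketch: loading the 8-bit GPTQ branch from the table with transformers.
# Requires optimum and auto-gptq alongside transformers; note the table's
# "ExLlama: No" column, so the ExLlama kernel cannot serve this quant.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "astronomer-io/Llama-3-8B-Instruct-GPTQ-8-Bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision="main",    # the branch from the "Branch" column
    device_map="auto",
)
```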