TheBloke committed
Commit ee718c7
1 Parent(s): 643c12d

Initial GGML model commit

Files changed (1):
  1. README.md +49 -32
README.md CHANGED
@@ -1,8 +1,6 @@
  ---
  inference: false
  license: other
- datasets:
- - gozfarb/ShareGPT_Vicuna_unfiltered
  ---

  <!-- header start -->
@@ -30,47 +28,65 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
  * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
  * [ctransformers](https://github.com/marella/ctransformers)

- ## Other repositories available
+ ## Repositories available

- * [4-bit GPTQ models for GPU inference](https://huggingface.co/Aeala/VicUnlocked-alpaca-65b-4bit)
- * [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/VicUnlocked-alpaca-65B-QLoRA-GGML)
- * [Original unquantised fp16 model in HF format](https://huggingface.co/TheBloke/VicUnlocked-alpaca-65B-QLoRA-fp16)
+ * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/VicUnlocked-alpaca-65B-QLoRA-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/VicUnlocked-alpaca-65B-QLoRA-GGML)
+ * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/VicUnlocked-alpaca-65B-QLoRA-fp16)

- ## THE FILES IN MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
-
- llama.cpp recently made another breaking change to its quantisation methods: https://github.com/ggerganov/llama.cpp/pull/1508
-
- I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
+ <!-- compatibility_ggml start -->
+ ## Compatibility
+
+ ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
+
+ I have quantised the files for these 'original' quant methods using an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
+
+ They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.
+
+ ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
+
+ These new quantisation methods are only compatible with llama.cpp as of June 6th, commit `2d43387`.
+
+ They will NOT be compatible with koboldcpp, text-generation-webui, and other UIs and libraries yet. Support is expected to come over the next few days.
+
+ ## Explanation of the new k-quant methods
+
+ The new methods available are:
+ * GGML_TYPE_Q2_K - "type-1" 2-bit quantisation in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantised with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
+ * GGML_TYPE_Q3_K - "type-0" 3-bit quantisation in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantised with 6 bits. This ends up using 3.4375 bpw.
+ * GGML_TYPE_Q4_K - "type-1" 4-bit quantisation in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantised with 6 bits. This ends up using 4.5 bpw.
+ * GGML_TYPE_Q5_K - "type-1" 5-bit quantisation. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
+ * GGML_TYPE_Q6_K - "type-0" 6-bit quantisation. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantised with 8 bits. This ends up using 6.5625 bpw.
+ * GGML_TYPE_Q8_K - "type-0" 8-bit quantisation. Only used for quantising intermediate results. The difference from the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantisation type.
+
+ Refer to the Provided Files table below to see what files use which methods, and how.
+ <!-- compatibility_ggml end -->

  ## Provided files
- | Name | Quant method | Bits | Size | RAM required | Use case |
+ | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
- | VicUnlocked-Alpaca-65B.ggmlv3.q4_0.bin | q4_0 | 4 | 36.73 GB | 39.23 GB | 4-bit. |
- | VicUnlocked-Alpaca-65B.ggmlv3.q4_1.bin | q4_1 | 4 | 40.81 GB | 43.31 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
- | VicUnlocked-Alpaca-65B.ggmlv3.q5_0.bin | q5_0 | 5 | 44.89 GB | 47.39 GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | VicUnlocked-Alpaca-65B.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB | 51.47 GB | 5-bit. Even higher accuracy and resource usage, and slower inference. |
- | VicUnlocked-Alpaca-65B.ggmlv3.q8_0.bin | q8_0 | 8 | 69.37 GB | 71.87 GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for most use cases. |
+ | VicUnlocked-Alpaca-65B.ggmlv3.q4_0.bin | q4_0 | 4 | 36.73 GB | 39.23 GB | Original llama.cpp quant method, 4-bit. |
+ | VicUnlocked-Alpaca-65B.ggmlv3.q4_1.bin | q4_1 | 4 | 40.81 GB | 43.31 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
+ | VicUnlocked-Alpaca-65B.ggmlv3.q5_0.bin | q5_0 | 5 | 44.89 GB | 47.39 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
+ | VicUnlocked-Alpaca-65B.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB | 51.47 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
+ | vicunlocked-65b.ggmlv3.q2_K.bin | q2_K | 2 | 27.33 GB | 29.83 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+ | vicunlocked-65b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 34.55 GB | 37.05 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+ | vicunlocked-65b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 31.40 GB | 33.90 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+ | vicunlocked-65b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 28.06 GB | 30.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
+ | vicunlocked-65b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 39.28 GB | 41.78 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
+ | vicunlocked-65b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 36.73 GB | 39.23 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
+ | vicunlocked-65b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 46.20 GB | 48.70 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
+ | vicunlocked-65b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 44.89 GB | 47.39 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |

- ### q8_0 file requires expansion from archive
-
- **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the q8_0 file in a multi-part ZIP file. The ZIP is not compressed; it just stores the .bin file in two parts.
-
- To decompress it, please download
- * `VicUnlocked-Alpaca-65B.ggmlv3.q8_0.zip`
- * `VicUnlocked-Alpaca-65B.ggmlv3.q8_0.z01`
-
- and extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use `7zip` - the basic `unzip` tool did not work. Example:
- ```
- sudo apt update -y && sudo apt install 7zip
- 7zz x VicUnlocked-Alpaca-65B.ggmlv3.q8_0.zip  # Once the q8_0.bin is extracted you can delete the .zip and .z01
- ```
+ **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

  ## How to run in `llama.cpp`

  I use the following command line; adjust for your tastes and needs:

  ```
- ./main -t 10 -ngl 32 -m VicUnlocked-Alpaca-65B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
+ ./main -t 10 -ngl 32 -m vicunlocked-65b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
  ```
  Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
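To pick that `-t` value programmatically, a small helper along the following lines can be used (a sketch, not part of the diff above; it assumes Python with the third-party `psutil` package installed):

```python
# Suggest a -t value for ./main by counting physical CPU cores.
# llama.cpp gains little from hyperthreads, hence logical=False.
import psutil

physical_cores = psutil.cpu_count(logical=False) or 1  # can be None on some platforms
print(f"suggested flag: -t {physical_cores}")
```

On the 8-core/16-thread example above, this prints `-t 8`.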
 
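The bpw figures in the new k-quant section and the Size / Max RAM columns of the new Provided Files table can be cross-checked with a few lines of arithmetic. A minimal sketch, assuming 256 weights per super-block as stated, per-super-block fp16 header sizes chosen to reproduce the quoted figures, and the usual 65.2B parameter count for LLaMA 65B:

```python
# Reproduce the quoted k-quant bits-per-weight (bpw) figures.
# "type-0" blocks store a quantised scale; "type-1" blocks store a
# quantised scale and min. header_bits is the assumed fp16 super-block
# header (one fp16 for type-0, two for type-1; one for Q2_K so that the
# result matches the quoted 2.5625).

def bpw(bits, blocks, scale_bits, min_bits, header_bits):
    weights = 256  # weights per super-block
    metadata = blocks * (scale_bits + min_bits) + header_bits
    return bits + metadata / weights

print(bpw(2, 16, 4, 4, 16))  # Q2_K -> 2.5625
print(bpw(3, 16, 6, 0, 16))  # Q3_K -> 3.4375
print(bpw(4,  8, 6, 6, 32))  # Q4_K -> 4.5
print(bpw(5,  8, 6, 6, 32))  # Q5_K -> 5.5
print(bpw(6, 16, 8, 0, 16))  # Q6_K -> 6.5625

# File-size check: q4_K_S uses GGML_TYPE_Q4_K for all tensors, so a 65B
# model should weigh roughly 65.2e9 params * 4.5 bpw / 8 bits per byte.
print(65.2e9 * 4.5 / 8 / 1e9)  # ~36.7 GB, vs 36.73 GB in the table

# Each table row shows Max RAM = file size + 2.5 GB (with no offloading),
# so RAM for any of the files can be estimated as its size plus ~2.5 GB.
```

The +2.5 GB pattern holds for every row of the table, so the RAM column can be re-derived from a file's size alone.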
@@ -102,12 +118,13 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

- **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
+ **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
+
+ **Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.

  Thank you to all my generous patrons and donaters!
- <!-- footer end -->

- Patreon special mention: Jonathan Leane; Talal Aujan. Thank you both, and to all my other patrons and donaters.
+ <!-- footer end -->

  # Original model card: Aeala's VicUnlocked Alpaca 65B QLoRA

@@ -125,7 +142,7 @@ Please note that this is a highly experimental LoRA model. It may do some good s
  ### Response:
  ```

- Current upload: checkpoint of step 1200 in training.
+ Current upload: checkpoint of a retrain at ~1000 steps with fixed QLoRA repo. (**6/4/2023**)


  ## Benchmarks
 