---
license: apache-2.0
tags:
- mistral
- conversational
- text-generation-inference
base_model: UsernameJustAnother/Nemo-12B-Marlin-v5
library_name: transformers
---

> [!WARNING]
> **General-Use Sampling:**<br>
> Mistral-Nemo-12B is very sensitive to the temperature sampler; start with values around **0.3**, or you may get strange results. MistralAI mentions this in the [Transformers](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407#transformers) section of their model card.

> [!NOTE]
> **Best Samplers:**<br>
> I had the best results with the following settings for Nemo-12B-Marlin-v5 (applied in the sketch below):<br>
> Temperature: `0.7`-`0.8`<br>
> Top K: `-1`<br>
> Min P: `0.05`<br>
> Repetition Penalty: `1.03` (consider increasing this as context length grows; I find `1.10` works well at 16k+ context)

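As a concrete illustration, here is a minimal sketch of applying these samplers through [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), the Python bindings for llama.cpp. The bindings, local filename, and prompt are my own assumptions rather than part of this card; any llama.cpp frontend with equivalent sampler options works the same way.

```python
# Minimal sketch (assumption: `pip install llama-cpp-python`; the GGUF file
# below is one of the quants from the table further down, downloaded locally).
from llama_cpp import Llama

llm = Llama(
    model_path="Nemo-12B-Marlin-v5-Q4_K_M.gguf",  # hypothetical local path
    n_ctx=16384,
)

out = llm(
    "Write a short story about a lighthouse keeper.",  # example prompt
    max_tokens=256,
    temperature=0.7,      # 0.7-0.8 for this finetune; ~0.3 for general Mistral-Nemo use
    top_k=-1,             # non-positive value disables top-k filtering
    min_p=0.05,
    repeat_penalty=1.03,  # consider ~1.10 at 16k+ context
)
print(out["choices"][0]["text"])
```
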
This is currently my favorite of the Mistral-Nemo finetunes released so far.

**Original Model:** [UsernameJustAnother/Nemo-12B-Marlin-v5](https://huggingface.co/UsernameJustAnother/Nemo-12B-Marlin-v5) (Thank you so much for your work ♥)

**How to Use:** [llama.cpp](https://github.com/ggerganov/llama.cpp)

**Original Model License:** Apache 2.0

**Release Used:** [b3538](https://github.com/ggerganov/llama.cpp/releases/tag/b3538)

# Quants
PPL = perplexity; lower is better.<br>
Comparisons were measured as quantized (QX_X) Llama-3-8B against FP16 Llama-3-8B, so treat them as a guideline rather than exact figures for this model.
| Quant Type | Note | Size |
| ---- | ---- | ---- |
| [Q2_K](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q2_K.gguf) | +3.5199 ppl @ Llama-3-8B | 4.79 GB |
| [Q3_K_S](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q3_K_S.gguf) | +1.6321 ppl @ Llama-3-8B | 5.53 GB |
| [Q3_K_M](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q3_K_M.gguf) | +0.6569 ppl @ Llama-3-8B | 6.08 GB |
| [Q3_K_L](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q3_K_L.gguf) | +0.5562 ppl @ Llama-3-8B | 6.56 GB |
| [Q4_K_S](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q4_K_S.gguf) | +0.5562 ppl @ Llama-3-8B | 7.12 GB |
| [Q4_K_M](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q4_K_M.gguf) | +0.1754 ppl @ Llama-3-8B | 7.48 GB |
| [Q5_K_S](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q5_K_S.gguf) | +0.1049 ppl @ Llama-3-8B | 8.52 GB |
| [Q5_K_M](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q5_K_M.gguf) | +0.0569 ppl @ Llama-3-8B | 8.73 GB |
| [Q6_K](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q6_K.gguf) | +0.0217 ppl @ Llama-3-8B | 10.1 GB |
| [Q8_0](https://huggingface.co/starble-dev/Nemo-12B-Marlin-v5-GGUF/blob/main/Nemo-12B-Marlin-v5-Q8_0.gguf) | +0.0026 ppl @ Llama-3-8B | 13.00 GB |
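
If it helps, here is a small sketch of fetching one of the quants above with `huggingface_hub`; the chosen filename is just an example, and any file from the table works.

```python
# Sketch: download a single GGUF quant from this repo
# (assumption: `pip install huggingface_hub`; filenames match the table above).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="starble-dev/Nemo-12B-Marlin-v5-GGUF",
    filename="Nemo-12B-Marlin-v5-Q4_K_M.gguf",  # pick any quant from the table
)
print(path)  # local cache path, ready to pass to llama.cpp as the model path
```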