starble-dev committed
Commit a427110
1 Parent(s): a0dc2aa

Update README.md

Files changed (1):
  1. README.md +59 -3
README.md CHANGED
@@ -1,3 +1,59 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ tags:
+ - mistral
+ - conversational
+ - text-generation-inference
+ base_model:
+ - UsernameJustAnother/Nemo-12B-Marlin-v5
+ - anthracite-org/magnum-12b-v2
+ library_name: transformers
+ ---
+
+ > [!WARNING]
+ > **General Use Sampling:**<br>
+ > Mistral-Nemo-12B is very sensitive to the temperature sampler; try values near **0.3** at first, or you may get some strange results. MistralAI notes this in the [Transformers](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407#transformers) section of their model card.
+
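+ As a rough illustration, here is a minimal `transformers` sketch at that temperature (the repo id and prompt are assumptions, and `device_map="auto"` requires `accelerate`):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "starble-dev/Starlight-V3-12B"  # assumed repo id for the full-precision weights
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
+
+ messages = [{"role": "user", "content": "Write a short scene set on a night train."}]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ # Low temperature, as MistralAI recommends for Nemo-based models.
+ out = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.3)
+ print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```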
+ > [!NOTE]
+ > **Best Samplers:**<br>
+ > I found the best results using the following settings for Starlight-V3-12B (see the sketch below):<br>
+ > Temperature: `0.7`-`1.2` (additional stopping strings become necessary as you raise the temperature)<br>
+ > Top K: `-1`<br>
+ > Min P: `0.05`<br>
+ > Rep Penalty: `1.03`-`1.1`
+
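+ A hedged sketch of those settings with `llama-cpp-python` (an assumption; any llama.cpp frontend exposes the same samplers), using the Q4_K_M quant from the table below:
+
+ ```python
+ from llama_cpp import Llama
+
+ llm = Llama(model_path="Starlight-V3-12B-Q4_K_M.gguf", n_ctx=8192)
+
+ out = llm.create_completion(
+     "Write a short scene set on a night train.",  # plain prompt; use your frontend's chat template if needed
+     max_tokens=512,
+     temperature=0.9,       # anywhere in the 0.7-1.2 range above
+     top_k=-1,              # values <= 0 disable top-k in llama.cpp
+     min_p=0.05,
+     repeat_penalty=1.05,   # within the 1.03-1.1 range
+     stop=["```"],          # extra stopping string; see the Results section
+ )
+ print(out["choices"][0]["text"])
+ ```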
+ # Why Version 3?
+ The earlier versions produced results bad enough that I didn't upload them; the version number is just my internal version.
+
+ # Goal
+ The idea is to keep the strengths of [anthracite-org/magnum-12b-v2](https://huggingface.co/anthracite-org/magnum-12b-v2) while adding some of the creativity
+ that the model seems to lack. Mistral-Nemo by itself behaves less sporadically because of the low temperature it needs, but it gets a bit repetitive as a result,
+ although it's still the best model I've used so far.
+
+ # Results
+ I am not entirely pleased with the result of the merge, but it seems okay, though base [anthracite-org/magnum-12b-v2](https://huggingface.co/anthracite-org/magnum-12b-v2)
+ might just be better by itself. However, I'll still experiment with different merge methods.
+ Leakage of the training data used in both models becomes more apparent at higher temperature values,
+ especially the use of author's notes in the system prompt. Generally, I'd advise adding a stopping string for "```" to keep the model from reproducing training data, as sketched below.
+
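+ A minimal sketch of such a stopping string for `transformers`, reusing `tokenizer`, `model`, and `inputs` from the first sketch (recent `transformers` versions can instead pass a `stop_strings` list together with `tokenizer=tokenizer` directly to `generate`):
+
+ ```python
+ from transformers import StoppingCriteria, StoppingCriteriaList
+
+ class StopOnString(StoppingCriteria):  # illustrative helper, not part of any library
+     def __init__(self, tokenizer, stop_string, prompt_len):
+         self.tokenizer = tokenizer
+         self.stop_string = stop_string
+         self.prompt_len = prompt_len
+
+     def __call__(self, input_ids, scores, **kwargs):
+         # Decode only the newly generated tail and stop once the string appears.
+         text = self.tokenizer.decode(input_ids[0, self.prompt_len:])
+         return self.stop_string in text
+
+ criteria = StoppingCriteriaList([StopOnString(tokenizer, "```", inputs.shape[-1])])
+ out = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.9,
+                      stopping_criteria=criteria)
+ ```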
+ **Original Models:**
+ - [UsernameJustAnother/Nemo-12B-Marlin-v5](https://huggingface.co/UsernameJustAnother/Nemo-12B-Marlin-v5) (Thank you so much for your work ♥)
+ - [anthracite-org/magnum-12b-v2](https://huggingface.co/anthracite-org/magnum-12b-v2) (Thank you so much for your work ♥)
+
+ **Official Quants:**<br>
+ PPL = perplexity; lower is better.<br>
+ Comparisons are done as QX_X Llama-3-8B against FP16 Llama-3-8B; treat them as a guideline rather than fact.
+ | Quant Type | Note | Size |
+ | ---- | ---- | ---- |
+ | [Q2_K](https://huggingface.co/starble-dev/Starlight-V3-12B-GGUF/blob/main/Starlight-V3-12B-Q2_K.gguf) | +3.5199 ppl @ Llama-3-8B | 4.79 GB |
+ | [Q3_K_S](https://huggingface.co/starble-dev/Starlight-V3-12B-GGUF/blob/main/Starlight-V3-12B-Q3_K_S.gguf) | +1.6321 ppl @ Llama-3-8B | 5.53 GB |
+ | [Q3_K_M](https://huggingface.co/starble-dev/Starlight-V3-12B-GGUF/blob/main/Starlight-V3-12B-Q3_K_M.gguf) | +0.6569 ppl @ Llama-3-8B | 6.08 GB |
+ | [Q3_K_L](https://huggingface.co/starble-dev/Starlight-V3-12B-GGUF/blob/main/Starlight-V3-12B-Q3_K_L.gguf) | +0.5562 ppl @ Llama-3-8B | 6.56 GB |
+ | [Q4_K_S](https://huggingface.co/starble-dev/Starlight-V3-12B-GGUF/blob/main/Starlight-V3-12B-Q4_K_S.gguf) | +0.2689 ppl @ Llama-3-8B | 7.12 GB |
+ | [Q4_K_M](https://huggingface.co/starble-dev/Starlight-V3-12B-GGUF/blob/main/Starlight-V3-12B-Q4_K_M.gguf) | +0.1754 ppl @ Llama-3-8B | 7.48 GB |
+ | [Q5_K_S](https://huggingface.co/starble-dev/Starlight-V3-12B-GGUF/blob/main/Starlight-V3-12B-Q5_K_S.gguf) | +0.1049 ppl @ Llama-3-8B | 8.52 GB |
+ | [Q5_K_M](https://huggingface.co/starble-dev/Starlight-V3-12B-GGUF/blob/main/Starlight-V3-12B-Q5_K_M.gguf) | +0.0569 ppl @ Llama-3-8B | 8.73 GB |
+ | [Q6_K](https://huggingface.co/starble-dev/Starlight-V3-12B-GGUF/blob/main/Starlight-V3-12B-Q6_K.gguf) | +0.0217 ppl @ Llama-3-8B | 10.1 GB |
+ | [Q8_0](https://huggingface.co/starble-dev/Starlight-V3-12B-GGUF/blob/main/Starlight-V3-12B-Q8_0.gguf) | +0.0026 ppl @ Llama-3-8B | 13.00 GB |
+
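+ A quick sketch for fetching one of the quants with `huggingface_hub` (repo id and file name come straight from the table above):
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # Downloads into the local HF cache and returns the path,
+ # ready to pass to a llama.cpp frontend as model_path.
+ path = hf_hub_download(
+     repo_id="starble-dev/Starlight-V3-12B-GGUF",
+     filename="Starlight-V3-12B-Q4_K_M.gguf",
+ )
+ print(path)
+ ```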
+ **Original Model Licenses & This Model License:** Apache 2.0