Lewdiculous committed
Commit 5293f64
1 Parent(s): 4c68f8e

Update README.md

Files changed (1):
  1. README.md +85 -3
README.md CHANGED

---
license: cc-by-4.0
---

# #llama-3 #roleplay

GGUF-IQ-Imatrix quants for [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3).

> [!IMPORTANT]
> These quants were made after the fixes from [llama.cpp/pull/6920](https://github.com/ggerganov/llama.cpp/pull/6920) were merged. <br>
> Use **KoboldCpp version 1.64** or higher.

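If you just want to sanity-check one of these quants outside of KoboldCpp, the sketch below loads a GGUF with llama-cpp-python (which wraps llama.cpp, so it likewise needs a build that includes the pull/6920 fixes). The model path, context size, and GPU-layer count are placeholders, not values from this card.

```python
# Minimal sanity-check sketch (not part of the original card).
# Assumes llama-cpp-python built against a llama.cpp release that includes the
# PR 6920 tokenizer fixes, and a locally downloaded quant (placeholder filename).
from llama_cpp import Llama

llm = Llama(
    model_path="path/to/L3-TheSpice-8b-quant.gguf",  # placeholder path
    n_ctx=8192,        # context window; adjust to your memory budget
    n_gpu_layers=-1,   # offload everything to GPU if available; 0 = CPU only
)

out = llm(
    "You are BotName, a roleplay character.\n\nUsername: Hello!\nBotName:",
    max_tokens=128,
    stop=["\nUsername:"],  # stop at the next user turn
)
print(out["choices"][0]["text"])
```
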
> [!NOTE]
> **Prompt formatting...** <br>
> Seems simple, so I'd try Alpaca or ChatML and see how it goes (both layouts are shown below for reference).

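For reference (this part is not from the original card), the usual ChatML and Alpaca layouts look like this; most frontends ship them as built-in presets, so you normally just select them rather than typing them out:

```
ChatML:

<|im_start|>system
{System Prompt}<|im_end|>
<|im_start|>user
{Input}<|im_end|>
<|im_start|>assistant
{Response}<|im_end|>

Alpaca:

{System Prompt}

### Instruction:
{Input}

### Response:
{Response}
```
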
# Original model information by the author:

No longer overtrained, and with the tokenizer fix applied to base Llama 3. Trained for 3 epochs.

The latest TheSpice, dipped in Mama Liz's LimaRP Oil.
I've focused on making the model more flexible and on providing a more unique experience.
I'm still working on cleaning up my dataset, but I've shrunk it down a lot to focus on a "less is more" approach.
This is ultimately a return to form of the way I used to train Thespis, with more of a focus on a small, hand-edited dataset.

## Datasets Used

* Capybara
* Claude Multiround 30k
* Augmental
* ToxicQA
* Yahoo Answers
* Airoboros 3.1
* LimaRP

## Features (examples from 0.1.1 because I'm too lazy to take new screenshots; it's tested, though)

Narration

If you request information about objects or characters in the scene, the model will narrate it to you, most of the time without moving the story forward.

# You can look at almost anything, as long as you end your message with "What do I see?"

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/VREY8QHtH6fCL0fCp8AAC.png)

# You can also request to know what a character is thinking or planning.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/U3RTAgbaB2m1ygfZGJ-SM.png)

# You can ask for a quick summary of the character as well.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/uXFd6GhnXS8w_egUEfcAp.png)

# Before continuing the conversation as normal.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/dYTQUdCshUDtp_BJ20tHy.png)

## Prompt Format: Chat (the default Ooba template and SillyTavern template)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/59vi4VWP2d0bCbsW2eU8h.png)

If you're running Ooba in verbose mode as a server, you can check whether your console is logging something that looks like this.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/mB3wZqtwN8B45nR7W1fgR.png)

```
{System Prompt}

Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}

```
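To make the layout above concrete, here is a small, hypothetical helper (not something Ooba or SillyTavern actually exposes) that assembles a transcript in that shape:

```python
# Hypothetical helper that builds the transcript layout shown above.
# `history` is a list of (user_text, bot_text) pairs already exchanged.
def build_prompt(system_prompt, history, new_input,
                 username="Username", botname="BotName"):
    lines = [system_prompt, ""]
    for user_text, bot_text in history:
        lines.append(f"{username}: {user_text}")
        lines.append(f"{botname}: {bot_text}")
    lines.append(f"{username}: {new_input}")
    lines.append(f"{botname}:")  # left open for the model to complete
    return "\n".join(lines)


prompt = build_prompt(
    system_prompt="You are BotName, a character in an ongoing roleplay.",
    history=[("Hi there!", "Hello! What brings you here?")],
    new_input="Tell me about the town we're in.",
)
```

When sending a prompt like this to a raw completion endpoint, it also helps to pass a stop string such as `"\nUsername:"` so generation ends at the next user turn.
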
## Presets

All screenshots above were taken with the SillyTavern preset below.
## Recommended SillyTavern Preset -> (Temp: 1.25, MinP: 0.1, RepPen: 1.05)
The following is a roughly equivalent Kobold Horde preset.
## Recommended Kobold Horde Preset -> MinP

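If you want to apply those sampler values outside SillyTavern, a request along these lines should work against a local KoboldCpp instance through its KoboldAI-style HTTP API. Treat the endpoint, port, and field names as assumptions to verify against your KoboldCpp version; the prompt is a placeholder.

```python
# Hedged sketch: sending the recommended samplers to a local KoboldCpp server.
# Field names follow the KoboldAI-style API exposed by KoboldCpp; verify them
# against your version's API docs before relying on this.
import requests

payload = {
    "prompt": "{System Prompt}\n\nUsername: Hello!\nBotName:",  # placeholder prompt
    "max_length": 250,
    "temperature": 1.25,   # Temp from the recommended preset
    "min_p": 0.1,          # MinP
    "rep_pen": 1.05,       # RepPen
    "stop_sequence": ["\nUsername:"],  # stop at the next user turn
}

resp = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
print(resp.json()["results"][0]["text"])
```
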
# Disclaimer

Please prompt responsibly and take anything output by any language model with a huge grain of salt. Thanks!