---
license: cc-by-4.0
---

# #llama-3 #roleplay

GGUF-IQ-Imatrix quants for [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3).

> [!IMPORTANT]  
> These quants were made after the fixes from [llama.cpp/pull/6920](https://github.com/ggerganov/llama.cpp/pull/6920). <br>
> Use **KoboldCpp version 1.64** or higher.
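
If you'd rather load one of these quants programmatically instead of through KoboldCpp, here is a minimal llama-cpp-python sketch (not part of the original card); the filename, context size, and GPU settings are placeholders, not recommendations from the author.

```python
# Minimal sketch of loading a GGUF quant with llama-cpp-python.
# The filename below is a hypothetical local path; use the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="L3-TheSpice-8b-v0.8.3-Q4_K_M-imat.gguf",  # placeholder filename
    n_ctx=8192,        # context window; adjust to your hardware
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm("Hello!", max_tokens=64)
print(out["choices"][0]["text"])
```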

> [!NOTE]
> **Prompt formatting...** <br>
> The prompt format is relatively simple; the author appears to recommend **ChatML**.
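
For reference, a ChatML-formatted exchange follows the standard layout below (the bracketed placeholders are illustrative):

```
<|im_start|>system
{System Prompt}<|im_end|>
<|im_start|>user
{Input}<|im_end|>
<|im_start|>assistant
{Response}<|im_end|>
```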

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/VNpZl0O7dpwWLK8i5RG5d.png)

# Original model information by the author:

Now not overtrained, and with the tokenizer fix applied to base Llama 3. Trained for 3 epochs.

The latest TheSpice, dipped in Mama Liz's LimaRP Oil.
I've focused on making the model more flexible and providing a more unique experience. 
I'm still working on cleaning up my dataset, but I've shrunk it down a lot to focus on a "less is more" approach.
This is ultimately a return to form of the way I used to train Thespis, with more of a focus on a small, hand-edited dataset.


## Datasets Used

* Capybara
* Claude Multiround 30k
* Augmental
* ToxicQA
* Yahoo Answers
* Airoboros 3.1
* LimaRP

## Features ( Examples from 0.1.1 because I'm too lazy to take new screenshots. It's tested, though. )

Narration

If you request information on objects or characters in the scene, the model will narrate it to you, most of the time without moving the story forward.

# You can look at mostly anything, as long as you end your message with "What do I see?"

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/VREY8QHtH6fCL0fCp8AAC.png)

# You can also request to know what a character is thinking or planning.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/U3RTAgbaB2m1ygfZGJ-SM.png)

# You can ask for a quick summary on the character as well.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/uXFd6GhnXS8w_egUEfcAp.png)

# Before continuing the conversation as normal.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/dYTQUdCshUDtp_BJ20tHy.png)

## Prompt Format: Chat ( The default Ooba template and Silly Tavern Template )

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/59vi4VWP2d0bCbsW2eU8h.png)

If you're using Ooba in verbose mode as a server, you can check whether your console is logging something that looks like this. 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/mB3wZqtwN8B45nR7W1fgR.png)

```
{System Prompt}

Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}

```
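
A small sketch (not from the original card; the persona names and helper are hypothetical) of how this chat-style template might be assembled programmatically:

```python
# Hypothetical helper that assembles the chat-style prompt shown above.
# "Username" and "BotName" are placeholders; substitute your own persona names.
def build_prompt(system_prompt: str, turns: list[tuple[str, str]]) -> str:
    lines = [system_prompt, ""]
    for user_msg, bot_msg in turns:
        lines.append(f"Username: {user_msg}")
        lines.append(f"BotName: {bot_msg}")
    return "\n".join(lines)

prompt = build_prompt(
    "You are BotName, a helpful roleplay partner.",
    [("Hello!", "Hi there! What shall we do today?")],
)
# End the final turn with "BotName:" so the model continues as the bot.
prompt += "\nUsername: Tell me about the tavern. What do I see?\nBotName:"
print(prompt)
```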
## Presets

All screenshots above were taken with the below SillyTavern Preset.
## Recommended Silly Tavern Preset -> (Temp: 1.25, MinP: 0.1, RepPen: 1.05)
Below is a roughly equivalent Kobold Horde Preset.
## Recommended Kobold Horde Preset -> MinP
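
As a rough translation of that SillyTavern preset into llama-cpp-python sampling arguments (a sketch under my own assumptions, not the author's configuration), the call might look like this, reusing the `llm` and `prompt` objects from the sketches above:

```python
# Hypothetical generation call mirroring the recommended preset:
# Temp 1.25, MinP 0.1, RepPen 1.05. Assumes `llm` is a loaded llama_cpp.Llama.
output = llm(
    prompt,
    max_tokens=300,
    temperature=1.25,
    min_p=0.1,
    repeat_penalty=1.05,
    top_p=1.0,  # effectively disabled so MinP does the filtering
    top_k=0,    # disabled
)
print(output["choices"][0]["text"])
```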


# Disclaimer

Please prompt responsibly and take anything outputted by any Language Model with a huge grain of salt. Thanks!