Files changed (1)
  1. README.md +11 -9
README.md CHANGED
@@ -3,6 +3,8 @@ tags:
  - merge
  - mergekit
  - lazymergekit
+ library_name: transformers
+ pipeline_tag: text-generation
  ---

  # NemoDori-v0.1-12B-MS
@@ -12,23 +14,23 @@ NemoDori-v0.1-12B-MS is a MODEL STOCK merge of the following models using [LazyM
  This is my 'first' merge model, just for testing purposes. I don't know what I'm doing, honestly...

  My experience using this in SillyTavern:
- - It advance the story slowly, responding to the last message quite nicely.
- - Creativity is good, sometimes surprised me with the similar response that I'd like to get.
- - It may skips time when the last message includes word(s) that resembles a promise (or literally time).
- - Sometimes respond with a long response, but it's kinda adapt to the overall roleplay message, i think...
+ - It advances the story slowly, responding to the last message quite nicely.
+ - Creativity is good, sometimes surprising me with a response close to the one I'd like to get.
+ - It may skip time when the last message includes word(s) that resemble a promise (or a literal time).
+ - Sometimes it gives a long response, but it kind of adapts to the overall roleplay, I think...


  ## Prompt and Preset

- **ChatML** works best so far. **Llama3** and **Mistral** prompts work, but sometimes speaks for you. (ChatML may speak for you, but not that often, just re-generate.)
+ **ChatML** works best so far. **Llama3** and **Mistral** prompts work, but they sometimes speak for you. (ChatML may also speak for you, but not that often; simply re-generate.)

  I use context and instruct from **[here](https://huggingface.co/Virt-io/SillyTavern-Presets/tree/main/Prompts/ChatML/v1.9)** (Credits to **[Virt-io](https://huggingface.co/Virt-io)**.)

- **[This](https://huggingface.co/RozGrov/NemoDori-v0.1-12B-MS/blob/main/NemoDori%20v0.1%20ST%20Preset.json)** is the preset I use for SillyTavern, it should be good enough.
- Tweak to your hearts content:
+ **[This](https://pastebin.com/4jSq8V4N)** is the preset I use for SillyTavern; it should be good enough.
+ Tweak to your heart's content:
  - **temp** can go higher (I stopped at 2),
- - **skip special tokens** may or may not needed. If it respond with "assistant" or "user" at the end, **disable** the checkbox, it should matters. (i did get it from my first couple of tries, but now, no more. i dunno man...)
- - **context length** so far still coherence at **28k tokens**, from my own testing.
+ - **skip special tokens** may or may not be needed. If it responds with "assistant" or "user" at the end, try **disabling** the checkbox. (I did get it in my first couple of tries, but now, no more. Not sure why...)
+ - **context length** is still coherent at **28k tokens** so far, based on my own testing.
  - everything else is... just fine, as long as you're not forcing it.
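Since this change adds `library_name: transformers` and `pipeline_tag: text-generation` to the card metadata, here is a minimal, untested sketch of loading the model that way. The repo ID is taken from the preset link in the card; whether the bundled tokenizer actually ships a ChatML chat template is an assumption, so the raw ChatML layout the card recommends is shown in a comment as a fallback.

```python
# Minimal sketch: load NemoDori-v0.1-12B-MS with the transformers
# text-generation pipeline, matching the metadata added in this change.
from transformers import AutoTokenizer, pipeline

model_id = "RozGrov/NemoDori-v0.1-12B-MS"  # repo ID taken from the card's preset link
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The card recommends ChatML. If the repo's tokenizer defines a chat template,
# apply_chat_template renders it; otherwise build the prompt by hand, e.g.:
#   <|im_start|>user\nDescribe the tavern as I walk in.<|im_end|>\n<|im_start|>assistant\n
messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Describe the tavern as I walk in."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    device_map="auto",
    torch_dtype="auto",
)
print(generator(prompt, max_new_tokens=256, return_full_text=False)[0]["generated_text"])
```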
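And a rough translation of the preset notes above (higher temperature, the skip-special-tokens toggle, long context) into plain `generate()` arguments. The parameter names are standard transformers ones, not the author's exact SillyTavern preset, so treat the values as placeholders.

```python
# Rough sketch only: standard transformers sampling arguments standing in for
# the SillyTavern preset described in the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RozGrov/NemoDori-v0.1-12B-MS"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# The card reports coherent behaviour out to roughly 28k tokens of context in its own testing.
# ChatML-style prompt, written out by hand for clarity.
prompt = (
    "<|im_start|>user\n"
    "Give me one short paragraph continuing the scene.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.2,      # the card reports usable output up to temp 2
    max_new_tokens=400,   # replies can run long, per the notes above
)

# Decoding with skip_special_tokens=True is the rough counterpart of the
# "skip special tokens" checkbox discussed in the list above.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```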