---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/DxZNdV33EVq6cK6_gwSqS.jpeg)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/sPI_QHGXE_egmQXTsYkld.png)

# Information
## Details
A new merge of NeMo-based models, thankfully this time with the ChatML format. My goal was to create a smart, universal roleplaying model that stays stable at higher contexts. So far it seems better than my best Nemomix attempts, especially at the 64k+ context I've been using. All credits and thanks go to the amazing Gryphe, MistralAI, Anthracite, Sao10K, and ShuttleAI for their amazing models.

## Instruct

ChatML, but Mistral Instruct should work too (theoretically).

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{message}<|im_end|>
<|im_start|>assistant
{response}<|im_end|>
```
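
Since the model speaks ChatML, the tokenizer's chat template can build this prompt for you. A minimal transformers sketch, assuming the weights live at `MarinaraSpaghetti/NemoRemix-12B` (inferred from the GGUF link below) and that the repo ships a ChatML chat template:

```python
# Hedged sketch: the repo id below is assumed from the GGUF link on this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MarinaraSpaghetti/NemoRemix-12B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a creative roleplaying partner."},
    {"role": "user", "content": "Hello!"},
]
# Renders the <|im_start|>/<|im_end|> turns shown above and appends the
# assistant header so generation continues as the assistant.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```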

## Parameters

I recommend running Temperature 1.0-1.2 with either 0.1 Top A or 0.01-0.1 Min P, plus DRY at 0.8/1.75/2/0 (Multiplier/Base/Allowed Length/Penalty Range). Temperatures below 1.0 also work. Nothing more is needed.
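
Of these, Temperature and Min P map onto standard transformers sampling options; Top A and DRY are backend samplers (e.g. SillyTavern with a compatible backend) and are not available in plain transformers. A hedged sketch of the overlapping part:

```python
# Only the samplers transformers actually implements are set here;
# Top A and DRY must be configured in your frontend/backend instead.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    do_sample=True,
    temperature=1.1,     # recommended range: 1.0-1.2
    min_p=0.05,          # recommended range: 0.01-0.1 (needs a recent transformers)
    max_new_tokens=512,  # arbitrary example value
)
# outputs = model.generate(inputs, generation_config=gen_config)
```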

### Settings

You can use my exact settings from here (use the ones from the ChatML Base/Customized folder): https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main

## GGUF

https://huggingface.co/MarinaraSpaghetti/NemoRemix-12B-GGUF
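
The quants can be run with llama.cpp or its Python bindings. A minimal llama-cpp-python sketch; the exact .gguf filename below is hypothetical, so check the repo above for the real quant names:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="NemoRemix-12B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=32768,           # raise toward 64k+ if you have the memory
    chat_format="chatml",  # matches the instruct format above
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative roleplaying partner."},
        {"role": "user", "content": "Hello!"},
    ],
    temperature=1.1,
    min_p=0.05,
)
print(out["choices"][0]["message"]["content"])
```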

# NemoRemix-v4.0-12B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the della_linear merge method, with F:\mergekit\mistralaiMistral-Nemo-Base-2407 as the base.

### Models Merged

The following models were included in the merge:
* F:\mergekit\mistralaiMistral-Nemo-Instruct-2407
* F:\mergekit\Gryphe_Pantheon-RP-1.5-12b-Nemo
* F:\mergekit\shuttleai_shuttle-2.5-mini
* F:\mergekit\Sao10K_MN-12B-Lyra-v1
* F:\mergekit\anthracite-org_magnum-12b-v2

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: F:\mergekit\Gryphe_Pantheon-RP-1.5-12b-Nemo
    parameters:
      weight: 0.1
      density: 0.3
  - model: F:\mergekit\mistralaiMistral-Nemo-Instruct-2407
    parameters:
      weight: 0.12
      density: 0.4
  - model: F:\mergekit\Sao10K_MN-12B-Lyra-v1
    parameters:
      weight: 0.2
      density: 0.5
  - model: F:\mergekit\shuttleai_shuttle-2.5-mini
    parameters:
      weight: 0.25
      density: 0.6
  - model: F:\mergekit\anthracite-org_magnum-12b-v2
    parameters:
      weight: 0.33
      density: 0.8
merge_method: della_linear
base_model: F:\mergekit\mistralaiMistral-Nemo-Base-2407
parameters:
  epsilon: 0.05
  lambda: 1
dtype: bfloat16
```
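
To reproduce the merge, mergekit can consume this YAML directly (the `mergekit-yaml` CLI does the same thing). A hedged sketch using mergekit's documented Python API; the file and output paths are placeholders:

```python
# Sketch based on mergekit's documented Python API; adjust paths to your setup.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("nemoremix.yaml", "r", encoding="utf-8") as fp:  # the YAML above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./NemoRemix-12B",  # output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```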

# Ko-fi
## Enjoying what I do? Consider donating here, thank you!

https://ko-fi.com/spicy_marinara