Jacoby746 committed on
Commit 984cca0
1 Parent(s): 530d189

Update README.md

Files changed (1)
  1. README.md +42 -41
README.md CHANGED
@@ -1,41 +1,42 @@
- ---
- base_model: []
- tags:
- - mergekit
- - merge
-
- ---
- # merge
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the SLERP merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * C:/Users/Jacoby/Downloads/text-generation-webui-main/models/uukuguy_speechless-instruct-mistral-7b-v0.2
- * C:/Users/Jacoby/Downloads/text-generation-webui-main/models/SanjiWatsuki_Kunoichi-DPO-v2-7B
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
-
- models:
-   - model: C:/Users/Jacoby/Downloads/text-generation-webui-main/models/SanjiWatsuki_Kunoichi-DPO-v2-7B
-   - model: C:/Users/Jacoby/Downloads/text-generation-webui-main/models/uukuguy_speechless-instruct-mistral-7b-v0.2
- merge_method: slerp
- base_model: C:/Users/Jacoby/Downloads/text-generation-webui-main/models/uukuguy_speechless-instruct-mistral-7b-v0.2
- parameters:
-   t:
-     - value: [0.5, 0.1, 0.4, 0.3, 0.5, 0.7, 0.5, 0.7, 0.2, 0.1, 0.4] # Preserving the first and last layers of Miqu untouched is key for good results
- embed_slerp: true # This is super important otherwise the merge will fail
- dtype: float16
-
-
- ```
 
 
+ ---
+ base_model:
+ - uukuguy/speechless-instruct-mistral-7b-v0.2
+ - SanjiWatsuki/Kunoichi-DPO-v2-7B
+ tags:
+ - mergekit
+ - merge
+ ---
+ # merge
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the SLERP merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * ../models/uukuguy_speechless-instruct-mistral-7b-v0.2
+ * ../models/SanjiWatsuki_Kunoichi-DPO-v2-7B
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+
+ models:
+   - model: ../models/SanjiWatsuki_Kunoichi-DPO-v2-7B
+   - model: ../models/uukuguy_speechless-instruct-mistral-7b-v0.2
+ merge_method: slerp
+ base_model: ../models/uukuguy_speechless-instruct-mistral-7b-v0.2
+ parameters:
+   t:
+     - value: [0.5, 0.1, 0.4, 0.3, 0.5, 0.7, 0.5, 0.7, 0.2, 0.1, 0.4]
+ embed_slerp: true
+ dtype: float16
+
+
+ ```
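
For context on the `slerp` method named in the configuration above: spherical linear interpolation blends each pair of corresponding weight tensors along the arc between them rather than along a straight line, with the list under `t` setting the interpolation factor across layer ranges (0 keeps the base model's weights, 1 keeps the other model's). The snippet below is a minimal illustrative sketch of that operation in NumPy, not mergekit's actual implementation; the function name, the toy tensors, and the fallback to plain linear interpolation for near-parallel vectors are assumptions made for the example.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    Illustrative sketch only; mergekit's own implementation handles more edge cases.
    """
    a = v0.ravel().astype(np.float64)
    b = v1.ravel().astype(np.float64)
    # Angle between the two flattened parameter vectors.
    cos_omega = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.sin(omega) < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        out = (1.0 - t) * a + t * b
    else:
        out = (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
    return out.reshape(v0.shape).astype(v0.dtype)

# Toy usage: interpolate two small "weight matrices" halfway (t = 0.5).
w_base = np.random.randn(4, 4).astype(np.float16)   # stand-in for a base-model tensor
w_other = np.random.randn(4, 4).astype(np.float16)  # the matching tensor from the other model
merged = slerp(0.5, w_base, w_other)
print(merged.shape, merged.dtype)
```

In the actual merge, mergekit applies an operation like this to every tensor across the transformer layers and writes the result at the `dtype` given in the config (float16 here).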