R136a1 committed
Commit 75c4a19
Parent(s): d5233e5

Upload folder using huggingface_hub

Files changed (2)
  1. README.md +13 -13
  2. output.safetensors +3 -0
README.md CHANGED
@@ -3,29 +3,29 @@ license: other
  language:
  - en
  ---
- [EXL2](https://github.com/turboderp/exllamav2/tree/master#exllamav2) Quantization of [Gryphe's MythoMax L2 13B](https://huggingface.co/Gryphe/MythoMax-L2-13b).

- Other quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) - [GGUF](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF) - [AWQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-AWQ)

  ## Model details

- | Branch | Bits | Perplexity | Description |
- |----------------------------------------------------------------------|------|------------|--------------------------------------------------------------|
- | [main](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/main) | 5 | 6.1018 | Up to 6144-token context on a T4 GPU |
- | [6bit](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/6bit) | 6 | 6.1182 | 4096-token context on a T4 GPU |
- | [3bit](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/3bit) | 3 | 6.3666 | Low-bit quant that still holds up well |
- | [4bit](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/4bit) | 4 | 6.1601 | Slightly better than 4-bit GPTQ; easily fits 8K context on a T4 GPU |
- | - | 7 | 6.1056 | 2048 max context on a T4 GPU |
- | - | 8 | 6.1027 | Just, why? |

- I'll upload the 7- and 8-bit quants if someone requests them. (I don't know why the 5-bit quant's perplexity is lower than that of the higher-bit quants; I think I did something wrong?)
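As a point of reference, here is a minimal sketch of loading one of these EXL2 branches with the exllamav2 Python API. The model path, context size, and sampler settings are placeholders, and the exact calls may differ between exllamav2 versions:

```
# Minimal sketch, assuming the exllamav2 Python API; the path and
# settings below are placeholders, not part of this repo.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "MythoMax-L2-13B-exl2"  # local clone of the branch you want
config.prepare()
config.max_seq_len = 6144  # e.g. what the 5-bit branch fits on a T4

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # fill available GPU memory automatically

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple(
    "### Instruction:\nSay hello.\n\n### Response:\n", settings, 64))
```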

  ## Prompt Format

- Alpaca format:
  ```
- ### Instruction:

  ### Response:
  ```
 
  language:
  - en
  ---
+ An improved, potentially even perfected variant of MythoMix, my [MythoLogic-L2](https://huggingface.co/Gryphe/MythoLogic-L2-13b) and [Huginn](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16) merge using a highly experimental tensor type merge technique. The main difference from MythoMix is that I allowed more of Huginn to intermingle with the single tensors located at the front and end of a model, resulting in increased coherency across the entire structure.

+ The script and the accompanying templates I used to produce both can [be found here](https://github.com/Gryphe/BlockMerge_Gradient/tree/main/YAML).
+
+ This model is proficient at both roleplaying and storywriting due to its unique nature.
+
+ Quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) (You're the best!)

  ## Model details

+ The idea behind this merge is that each layer is composed of several tensors, which are in turn responsible for specific functions. Using MythoLogic-L2's robust understanding as its input and Huginn's extensive writing capability as its output seems to have resulted in a model that excels at both, confirming my theory. (More details to be released at a later time.)

+ This type of merge is incapable of being illustrated, as each of its 363 tensors had a unique ratio applied to it. As with my prior merges, gradients were part of these ratios to further finetune its behaviour.
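The merge itself cannot be reproduced from a single formula, but as a toy illustration of the per-tensor gradient idea described above (this is not the actual script, which is the linked BlockMerge_Gradient repo; the file names and the linear ramp here are invented for the example):

```
# Toy sketch of a per-tensor gradient merge; the real merge gives each
# of the 363 tensors its own hand-tuned ratio rather than a linear ramp.
import torch
from safetensors.torch import load_file, save_file

a = load_file("mythologic-l2-13b.safetensors")  # hypothetical "input" model
b = load_file("huginn-13b.safetensors")         # hypothetical "output" model

merged = {}
names = sorted(a.keys())
for i, name in enumerate(names):
    t = i / max(len(names) - 1, 1)  # 0.0 at the front of the model, 1.0 at the end
    merged[name] = ((1.0 - t) * a[name].float() + t * b[name].float()).half()

save_file(merged, "merged.safetensors")
```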

  ## Prompt Format

+ This model primarily uses Alpaca formatting, so for optimal performance, use:
  ```
+ <System prompt/Character Card>

+ ### Instruction:
+ Your instruction or question here.
+ For roleplay purposes, I suggest the following: Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.

  ### Response:
  ```
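For completeness, a small, purely illustrative helper for assembling that template in code (all names are placeholders):

```
# Illustrative only: builds the Alpaca-style prompt shown above.
def build_prompt(system: str, instruction: str) -> str:
    return f"{system}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

print(build_prompt(
    "You are Aria, a witty starship pilot.",  # system prompt / character card
    "Write Aria's next reply in a chat between User and Aria. Write a single reply only.",
))
```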
output.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2c076038892d494598d62098052125a4cf4df44de9f05a1045b844a0dc513b10
+ size 8957184464