---
license: other
language:
- en
---

[EXL2](https://github.com/turboderp/exllamav2/tree/master#exllamav2) quantization of [Gryphe's MythoMax L2 13B](https://huggingface.co/Gryphe/MythoMax-L2-13b).

Other quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) - [GGUF](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF) - [AWQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-AWQ)

## Model details

Perplexity of the unquantized base model: 5.7447

| **Branch**                                                           | **Bits (bpw)** | **Perplexity** | **Description**                                              |
|----------------------------------------------------------------------|----------------|----------------|--------------------------------------------------------------|
| [3bit](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/3bit) | 3.73           | 5.8251         | Lowest-bit quant that still holds up well                    |
| [4bit](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/4bit) | 4.33           | 5.7784         | Fits 6K context on a T4 GPU                                  |
| [main](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/main) | 5.33           | 5.7427         | Fits 4K context on a T4 GPU (recommended for Google Colab)   |
| [6bit](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/6bit) | 6.13           | 5.7347         | Best quality, for those with the hardware to run it          |
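To run a particular quant, download the matching branch. A minimal sketch using `huggingface_hub`'s `snapshot_download` (the `revision` argument selects the branch from the table above; the `local_dir` path is only illustrative):

```python
# Sketch: fetch one quant branch of this repo with huggingface_hub.
# The local_dir path is a placeholder; any writable directory works.
from huggingface_hub import snapshot_download

model_dir = snapshot_download(
    repo_id="R136a1/MythoMax-L2-13B-exl2",
    revision="4bit",  # branch name from the table above
    local_dir="MythoMax-L2-13B-exl2-4bit",
)
print(f"Model downloaded to {model_dir}")
```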

## Prompt Format

This model uses the Alpaca prompt format:
```
### Instruction:

{prompt}

### Response:
```
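For reference, a minimal inference sketch modeled on the example scripts in the [exllamav2](https://github.com/turboderp/exllamav2) repository. The class names and `generate_simple` call come from that library; the model directory, prompt text, and sampling values are placeholders, not recommended settings:

```python
# Sketch based on exllamav2's example scripts; paths and sampling
# settings below are illustrative placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "MythoMax-L2-13B-exl2-4bit"  # directory from the download step
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # load weights, splitting across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# Alpaca-style prompt, matching the format shown above.
prompt = "### Instruction:\n\nWrite a short story about a dragon.\n\n### Response:\n"
output = generator.generate_simple(prompt, settings, 200)
print(output)
```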