apepkuss79 committed on
Commit 2825481
1 parent: 82ce2bb

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -79,8 +79,8 @@ tags:
  | [Qwen2.5-72B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 43.9 GB | small, greater quality loss |
  | [Qwen2.5-72B-Instruct-Q5_0-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_0-00001-of-00002.gguf) | Q5_0 | 5 | 32.2 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
  | [Qwen2.5-72B-Instruct-Q5_0-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_0-00002-of-00002.gguf) | Q5_0 | 5 | 18 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
- | [Qwen2.5-72B-Instruct-Q5_K_M-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_K_M-00001-of-00002.gguf) | Q5_K_M | 5 | 32.2 GB | large, very low quality loss - recommended |
- | [Qwen2.5-72B-Instruct-Q5_K_M-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_K_M-00002-of-00002.gguf) | Q5_K_M | 5 | 22.3 GB | large, very low quality loss - recommended |
+ | [Qwen2.5-72B-Instruct-Q5_K_M-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_K_M-00001-of-00002.gguf) | Q5_K_M | 5 | 29.9 GB | large, very low quality loss - recommended |
+ | [Qwen2.5-72B-Instruct-Q5_K_M-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_K_M-00002-of-00002.gguf) | Q5_K_M | 5 | 24.6 GB | large, very low quality loss - recommended |
  | [Qwen2.5-72B-Instruct-Q5_K_S-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_K_S-00001-of-00002.gguf) | Q5_K_S | 5 | 32.1 GB | large, low quality loss - recommended |
  | [Qwen2.5-72B-Instruct-Q5_K_S-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q5_K_S-00002-of-00002.gguf) | Q5_K_S | 5 | 32.1 GB | large, low quality loss - recommended |
  | [Qwen2.5-72B-Instruct-Q6_K-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-Q6_K-00001-of-00002.gguf) | Q6_K | 6 | 32.2 GB | very large, extremely low quality loss |