apepkuss79 committed
Commit 5aaaaae • Parent: 2825481
Update README.md

README.md: CHANGED
@@ -66,7 +66,7 @@ tags:
 --ctx-size 131072
 ```
 
-## Quantized GGUF Models
+<!-- ## Quantized GGUF Models
 
 | Name | Quant method | Bits | Size | Use case |
 | ---- | ---- | ---- | ---- | ----- |
@@ -92,6 +92,6 @@ tags:
 | [Qwen2.5-72B-Instruct-f16-00002-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00002-of-00005.gguf) | f16 | 16 | 32.1 GB| |
 | [Qwen2.5-72B-Instruct-f16-00003-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00003-of-00005.gguf) | f16 | 16 | 32.1 GB| |
 | [Qwen2.5-72B-Instruct-f16-00004-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00004-of-00005.gguf) | f16 | 16 | 32.1 GB| |
-| [Qwen2.5-72B-Instruct-f16-00005-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00005-of-00005.gguf) | f16 | 16 | 17.3 GB| |
+| [Qwen2.5-72B-Instruct-f16-00005-of-00005.gguf](https://huggingface.co/second-state/Qwen2.5-72B-Instruct-GGUF/blob/main/Qwen2.5-72B-Instruct-f16-00005-of-00005.gguf) | f16 | 16 | 17.3 GB| | -->
 
 *Quantized with llama.cpp b3751*
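A note on the split f16 files in the (now commented-out) table: the shards follow llama.cpp's standard `-0000N-of-00005` split naming, so the first shard (outside this hunk's context) would be `Qwen2.5-72B-Instruct-f16-00001-of-00005.gguf`. Recent llama.cpp builds can load a split model by pointing at the first shard directly; alternatively, the shards can be merged with the `gguf-split` tool that ships with llama.cpp. A minimal sketch, assuming all five shards are in the current directory and the tool is built under its newer name `llama-gguf-split`:

```bash
# Merge the five f16 shards into a single GGUF file.
# --merge takes the first shard as input; the remaining shards are
# discovered automatically from the -0000N-of-00005 naming scheme.
llama-gguf-split --merge \
  Qwen2.5-72B-Instruct-f16-00001-of-00005.gguf \
  Qwen2.5-72B-Instruct-f16.gguf
```

Merging is optional for inference; it mainly simplifies moving or serving the model as one file.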