Commit a78bb39 by TheBloke (1 parent: c9024c7)

Update README.md

Files changed (1):
  README.md +4 -4
README.md CHANGED
@@ -38,13 +38,13 @@ quantized_by: TheBloke
 <!-- header end -->
 
 # Athena V2 - AWQ
-- Model creator: [IkariDev](https://huggingface.co/IkariDev)
+- Model creator: [IkariDev and Undi95](https://huggingface.co/IkariDev)
 - Original model: [Athena V2](https://huggingface.co/IkariDev/Athena-v2)
 
 <!-- description start -->
 ## Description
 
-This repo contains AWQ model files for [IkariDev's Athena V2](https://huggingface.co/IkariDev/Athena-v2).
+This repo contains AWQ model files for [IkariDev and Undi95's Athena V2](https://huggingface.co/IkariDev/Athena-v2).
 
 
 ### About AWQ
@@ -59,7 +59,7 @@ It is also now supported by continuous batching server [vLLM](https://github.com
 * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Athena-v2-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Athena-v2-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Athena-v2-GGUF)
-* [IkariDev's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/IkariDev/Athena-v2)
+* [IkariDev and Undi95's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/IkariDev/Athena-v2)
 <!-- repositories-available end -->
 
 <!-- prompt-template start -->
@@ -258,7 +258,7 @@ And thank you again to a16z for their generous grant.
 
 <!-- footer end -->
 
-# Original model card: IkariDev's Athena V2
+# Original model card: IkariDev and Undi95's Athena V2
 
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/y9gdW2923RkORUxejcLVL.png)