bartowski committed
Commit f8336d0
1 Parent(s): f3be757

Update README.md

Files changed (1):
  1. README.md +3 -1
README.md CHANGED
@@ -193,7 +193,7 @@ quantized_by: bartowski
 <b>This conversion is based on the merged Llama 3 support in llama.cpp (release b2710)</b>
 
 ### Known working on:
-- LM Studio 0.2.20
+- LM Studio 0.2.20*
 - koboldcpp 1.63
 
 ### Confirmed not working on (as of April 21):
@@ -201,6 +201,8 @@ quantized_by: bartowski
 
 Any others unknown, feel free to comment
 
+*: LM Studio 0.2.20 seems to work on Mac, but not on Windows, test and verify for yourself to see if this is the right version to use
+
 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2710">b2710</a> for quantization.
 
 Original model: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct