24bean committed
Commit d859711
1 Parent(s): bf85d7a

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -30,13 +30,13 @@ pip3 install huggingface-hub>=0.17.1
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
 ```shell
- huggingface-cli download 24bean/Llama-2-7B-ko-GGUF llama-2-ko-7b_q8_0.gguf --local-dir . --local-dir-use-symlinks False
+ huggingface-cli download 24bean/Llama-2-ko-7B-GGUF llama-2-ko-7b_q8_0.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 Or you can download llama-2-ko-7b.gguf, non-quantized model by
 
 ```shell
- huggingface-cli download 24bean/Llama-2-7B-ko-GGUF llama-2-ko-7b.gguf --local-dir . --local-dir-use-symlinks False
+ huggingface-cli download 24bean/Llama-2-ko-7B-GGUF llama-2-ko-7b.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 ## Example `llama.cpp` command
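
As a minimal sketch (not part of this commit), the same files can also be fetched from Python with `huggingface_hub`'s `hf_hub_download`, assuming `huggingface-hub>=0.17.1` as installed above; the repo id and filename are taken from the updated commands in the diff:

```python
# Sketch only: Python alternative to the huggingface-cli commands above.
from huggingface_hub import hf_hub_download

# Download the 8-bit quantized GGUF file into the current directory.
hf_hub_download(
    repo_id="24bean/Llama-2-ko-7B-GGUF",
    filename="llama-2-ko-7b_q8_0.gguf",
    local_dir=".",
)
```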