TheBloke committed on
Commit a862856
1 Parent(s): cf75555

Update README.md

Files changed (1)
  1. README.md +1 -3
README.md CHANGED
@@ -33,11 +33,9 @@ They were produced by downloading the PTH files from Meta, and then converting t
 
 Command to convert was:
 ```
- python3 /workspace/venv/pytorch2/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir /workspace/git/llama/download --model_size 13B --output_dir /workspace/process/llama-2-13b/source --safe_serialization true
+ python3 /workspace/venv/pytorch2/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir /workspace/git/llama/download --model_size 13B --output_dir /workspace/process/llama-2-13b/source
 ```
 
- The files were saved in Safetensors format.
-
 ## Repositories available
 
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-13B-GPTQ)
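For reference, a minimal sketch of loading the checkpoint produced by the conversion command above with `transformers`. This is not part of the commit; the directory path is an assumption taken from the `--output_dir` argument shown in the command.

```python
# Minimal sketch: load the HF-format Llama 2 13B checkpoint produced by
# convert_llama_weights_to_hf.py. The path is assumed to match the
# --output_dir used in the conversion command above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "/workspace/process/llama-2-13b/source"  # assumed output_dir

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir)

# Quick smoke test that the converted weights load and generate text.
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```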