bhenrym14 committed
Commit
8c0ae29
1 Parent(s): 5287dd9

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -24,7 +24,7 @@ Pretraining took 10 hours. Fine-tuning took ~41 hours on 1x RTX 6000 Ada.
 
 The easiest way is to use the GPTQ weights (linked above) with [oobabooga text-generation-webui](https://github.com/oobabooga/text-generation-webui) and ExLlama. You'll need to set max_seq_len to 16384 and compress_pos_emb to 8.
 
-**IMPORTANT: To use these weights with HF transformers you'll need to patch in the appropriate RoPE scaling module. see: [replace_llama_rope_with_scaled_rope](https://github.com/bhenrym14/qlora-airoboros-longcontext/blob/main/scaledllama/llama_rope_scaled_monkey_patch.py)**
+**IMPORTANT: To use these weights with autoGPTQ or GPTQ-for-LLama you'll need to patch in the appropriate RoPE scaling module. see: [replace_llama_rope_with_scaled_rope](https://github.com/bhenrym14/qlora-airoboros-longcontext/blob/main/scaledllama/llama_rope_scaled_monkey_patch-16k.py)**
 
 I have had issues with going beyond 8192 tokens with exllama. I have not tested that with this model. YMMV
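
For context on the flagged change, here is a minimal, hypothetical sketch of what the patched loading flow might look like with AutoGPTQ. Only the function name `replace_llama_rope_with_scaled_rope` comes from the linked patch file; the import path (assuming the linked `llama_rope_scaled_monkey_patch-16k.py` is saved locally under an importable name), the model path, and the `from_quantized` arguments are assumptions, not part of this commit.

```python
# Hypothetical usage sketch, not part of this commit. Assumes the linked
# llama_rope_scaled_monkey_patch-16k.py has been saved locally as
# llama_rope_scaled_monkey_patch_16k.py (a hyphenated filename can't be
# imported), and that the 16k variant hardcodes its interpolation factor.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

from llama_rope_scaled_monkey_patch_16k import replace_llama_rope_with_scaled_rope

# Patch transformers' LLaMA rotary embeddings BEFORE building the model,
# so positions up to 16384 are interpolated into the base 2048 range.
replace_llama_rope_with_scaled_rope()

model_dir = "path/to/gptq-weights"  # placeholder for the linked GPTQ repo
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    device="cuda:0",
    use_safetensors=True,  # assumption about the weight file format
)

prompt = "Summarize the plot of Moby-Dick in three sentences."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The scaling lines up with the webui settings in the same hunk: compress_pos_emb = 8 is simply the extended context divided by the base LLaMA context, 16384 / 2048.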