Commit 448e772, committed by Epiculous
1 parent: cb23f8e

Update README.md

Files changed (1): README.md (+3, -0)
README.md CHANGED
@@ -39,6 +39,9 @@ If you are using GGUF I strongly advise using ChatML, for some reason that quant
  [Crimson_Dawn-Nitral-Special](https://files.catbox.moe/8xjxht.json) - Considered the best settings! <br/>
  [Crimson_Dawn-Magnum-Style](https://files.catbox.moe/lc59dn.json)
 
+ ### Tokenizer
+ If you are using SillyTavern, please set the tokenizer to API (WebUI/koboldcpp).
+
  ## Training
  Training was done twice, 2 epochs each, on 2x [NVIDIA A6000 GPUs](https://www.nvidia.com/en-us/design-visualization/rtx-a6000/) using LoRA. A two-phased approach was used: the base model was first trained for 2 epochs on Instruct data, and the resulting LoRA was applied to the base. The modified base was then trained for 2 epochs on RP data, and the new RP LoRA was applied to the modified base, resulting in what you see here.
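To make the two-phase procedure described in the Training section concrete, below is a minimal sketch of merging an Instruct LoRA and then an RP LoRA into a base model with the `peft` library. The model and adapter paths are hypothetical placeholders; this illustrates the general technique, not the exact training code used for this release.

```python
# Minimal sketch of the two-phase LoRA merge described above, using
# transformers + peft. All model/adapter paths are hypothetical placeholders.
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Phase 1: load the base model and fold in the Instruct-trained LoRA.
base = AutoModelForCausalLM.from_pretrained("path/to/base-model")
instruct = PeftModel.from_pretrained(base, "path/to/instruct-lora")
instruct_merged = instruct.merge_and_unload()  # LoRA weights merged into the base

# Phase 2: fold the RP-trained LoRA into the already-merged model.
rp = PeftModel.from_pretrained(instruct_merged, "path/to/rp-lora")
final_model = rp.merge_and_unload()

final_model.save_pretrained("path/to/output")
```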
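The surrounding README context also recommends the ChatML prompt format for the GGUF quants. For reference, a ChatML prompt follows the standard `<|im_start|>` / `<|im_end|>` convention, as in the sketch below; the helper function and example messages are illustrative and not part of this repository.

```python
# Illustrative ChatML prompt assembly; role names and special tokens follow
# the standard ChatML convention, not anything specific to this model card.
def build_chatml_prompt(system: str, user: str) -> str:
    """Return a ChatML-formatted prompt ending with an open assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_chatml_prompt("You are a helpful roleplay assistant.", "Hello!"))
```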