---
license: apache-2.0
base_model: alpindale/WizardLM-2-8x22B
---
# SorcererLM-8x22b-bf16
Oh boy, here we go. Low-rank (`r=16, alpha=32`) LoRA on top of [WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B), trained for 2 epochs on (cleaned & deduped) c2-logs. As far as I can tell, this is an upgrade over `WizardLM-2-8x22B` for RP purposes.
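For reference, the stated hyperparameters map onto a PEFT config roughly like this. The target modules and dropout are assumptions on my part; the real values live in the qlora-pipe configs under `train/`:

```python
# Illustrative only -- a PEFT LoraConfig matching the stated r/alpha.
# target_modules and dropout are assumptions; see train/ for the real configs.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,              # low rank, as stated above
    lora_alpha=32,     # alpha = 2 * r
    lora_dropout=0.0,  # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
```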
## Why A LoRA?
The choice was fully intentional. I briefly considered an FFT, but for this particular use-case a LoRA seemed a better fit. `WizardLM-2-8x22B` is smart by itself, but the vocabulary it falls back on leaves much to be desired when it comes to RP. Training a low-rank LoRA on top of it to teach it some of Claude's writing style remedies that.
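To put the "low-rank" part in perspective, here's a back-of-the-envelope count of trainable parameters, assuming Mixtral-8x22B-style dimensions and the attention-only target modules sketched above (both assumptions):

```python
# Rough count of LoRA trainable params, assuming hidden=6144, 56 layers,
# GQA with 8 KV heads of dim 128, and LoRA on q/k/v/o only.
# Each adapted linear of shape (d_out, d_in) adds r * (d_in + d_out) params.
r = 16
hidden, kv_dim, layers = 6144, 8 * 128, 56
per_layer = (
    r * (hidden + hidden)   # q_proj
    + r * (hidden + kv_dim)  # k_proj
    + r * (hidden + kv_dim)  # v_proj
    + r * (hidden + hidden)  # o_proj
)
print(f"{per_layer * layers / 1e6:.1f}M trainable params")  # ~34.9M, vs ~141B total
```

A few tens of millions of trainable parameters against a ~141B base is a tiny fraction, which is exactly what you want when the goal is nudging style rather than re-teaching the model.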
## Prompting
- Use the templates in [Quant-Cartel/Recommended-Settings](https://huggingface.co/Quant-Cartel/Recommended-Settings) in the `SorcererLM` folder.
- Alternatively, use Vicuna 1.1 and a sane context template. The model is somewhat sensitive to samplers; I'd recommend Temperature 1, MinP 0.05, and a dash of DRY, but YMMV (see the sketch below). Shorter prompts seem to work better, too.
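If you're rolling your own frontend, the Vicuna 1.1 turn structure and the suggested samplers look roughly like this. The sampler keys follow llama.cpp / text-generation-webui naming, and the DRY multiplier is my guess at what "a dash" means:

```python
# Minimal sketch: Vicuna 1.1 formatting plus the suggested sampler settings.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def vicuna_v11(user_msg: str, system: str = SYSTEM) -> str:
    """Single-turn Vicuna 1.1 prompt: 'SYSTEM USER: ... ASSISTANT:'."""
    return f"{system} USER: {user_msg} ASSISTANT:"

# Recent transformers versions expose temperature and min_p; DRY needs a
# llama.cpp / text-generation-webui-style backend.
samplers = {
    "temperature": 1.0,
    "min_p": 0.05,
    "dry_multiplier": 0.8,  # "a dash of DRY" -- this value is an assumption
}
```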
## Acknowledgments
- My [Cartel](https://huggingface.co/Quant-Cartel) bros, [Envoid](https://huggingface.co/Envoid) and especially [I^2](https://huggingface.co/InferenceIllusionist), for being amazing.
- My wallet for making sure I could do this without starving.
## Training
Trained using [qlora-pipe](https://github.com/tdrussell/qlora-pipe). The configs are included in the `train` subfolder; a rough illustration of the setup follows.
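The actual run used qlora-pipe with the configs in `train/`. Purely for illustration, a TRL equivalent of "QLoRA adapter, 2 epochs" might look like the sketch below; the dataset path, batch settings, and target modules are placeholders, and the TRL API shifts between versions:

```python
# Not the actual training code (see train/ and qlora-pipe for that).
# A rough TRL equivalent: 4-bit quantized base, r=16/alpha=32 LoRA, 2 epochs.
import torch
from datasets import load_dataset
from transformers import BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="c2-logs.jsonl")["train"]  # placeholder path

trainer = SFTTrainer(
    model="alpindale/WizardLM-2-8x22B",
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
    args=SFTConfig(
        output_dir="sorcererlm-lora",
        num_train_epochs=2,             # 2 epochs, as stated above
        per_device_train_batch_size=1,  # placeholder
        bf16=True,
        model_init_kwargs={
            "quantization_config": BitsAndBytesConfig(
                load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
            ),
            "device_map": "auto",
        },
    ),
)
trainer.train()
```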