---
base_model:
- rAIfle/Acolyte-22B
quantized_by: Brioch
base_model_relation: quantized
pipeline_tag: text-generation
---

6.5 bpw EXL2 quant of [Acolyte-22B](https://huggingface.co/rAIfle/Acolyte-22B)

---

# Acolyte-22B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6569a4ed2419be6072890cf8/3dcGMcrWK2-2vQh9QBt3o.png)

A LoRA trained on a bunch of random datasets on top of Mistral-Small-Instruct-2409, then SLERP-merged onto the base model at 0.5. Decent enough for its size.

Check the [LoRA](https://huggingface.co/rAIfle/Acolyte-LORA) for dataset info.

Use the `Mistral V2 & V3` template.