  e88 88e                               d8     
 d888 888b  8888 8888  ,"Y88b 888 8e   d88     
C8888 8888D 8888 8888 "8" 888 888 88b d88888   
 Y888 888P  Y888 888P ,ee 888 888 888  888     
  "88 88"    "88 88"  "88 888 888 888  888     
      b                                        
      8b,                                      
 
  e88'Y88                  d8           888    
 d888  'Y  ,"Y88b 888,8,  d88    ,e e,  888    
C8888     "8" 888 888 "  d88888 d88 88b 888    
 Y888  ,d ,ee 888 888     888   888   , 888    
  "88,d88 "88 888 888     888    "YeeP" 888    
                                               
PROUDLY PRESENTS         

SorcererLM-22B-exl2-longcal

Quantized using 115 rows of 8192 tokens from the default ExLlamaV2 calibration dataset.
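
For reference, a minimal sketch of how a quant like this can be produced with ExLlamaV2's convert.py, driven from Python. The paths are placeholders and the flag names should be verified against the convert.py in your ExLlamaV2 checkout; the measurement.json kept on the main branch is the output of the measurement pass, which can be reused across bitrates.

  # Hypothetical paths; flag names follow ExLlamaV2's convert.py and should be verified.
  import subprocess

  subprocess.run(
      [
          "python", "convert.py",
          "-i", "/models/SorcererLM-22B",           # full-precision source model
          "-o", "/tmp/exl2-work",                   # working / scratch directory
          "-cf", "/models/SorcererLM-22B-6.0bpw",   # compiled quant output directory
          "-b", "6.0",                              # target bits per weight
          "-hb", "6",                               # lm_head bits
          "-r", "115",                              # calibration rows
          "-l", "8192",                             # tokens per calibration row
      ],
      check=True,
  )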

Branches:

  • main -- measurement.json
  • 8.0b8h -- 8.0bpw, 8bit lm_head
  • 6.0b6h -- 6.0bpw, 6bit lm_head
  • 5.0b6h -- 5.0bpw, 6bit lm_head
  • 4.0b6h -- 4.0bpw, 6bit lm_head
  • 3.5b6h -- 3.5bpw, 6bit lm_head
  • 2.25b6h -- 2.25bpw, 6bit lm_head
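
Each branch is a self-contained quant. One way to fetch a single branch is huggingface_hub's snapshot_download, passing the branch name as the revision (the local directory below is just an example):

  # pip install huggingface_hub
  from huggingface_hub import snapshot_download

  snapshot_download(
      repo_id="Quant-Cartel/SorcererLM-22B-exl2-longcal",
      revision="6.0b6h",                        # any branch from the list above
      local_dir="SorcererLM-22B-exl2-6.0b6h",   # example local path
  )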

Original model link: InferenceIllusionist/SorcererLM-22B

Original model README below.


SorcererLM-22B

Because good things always come in threes!

SorcererLM-22B is here, rounding out the trinity of Mistral-Small-Instruct tunes from the Quant Cartel.

Prompt Format
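
Absent other guidance, a safe default is to let the tokenizer's bundled chat template drive formatting rather than hard-coding a template; a minimal sketch (purely illustrative, not a statement of the tune's intended format):

  # Uses whatever chat template ships with the model's tokenizer.
  from transformers import AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("InferenceIllusionist/SorcererLM-22B")
  prompt = tokenizer.apply_chat_template(
      [{"role": "user", "content": "Hello!"}],
      tokenize=False,
      add_generation_prompt=True,
  )
  print(prompt)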

Quantized Versions

Training

For starters, this is a LoRA tune on top of Mistral-Small-Instruct-2409, not a pruned version of SorcererLM-8x22b.

Trained with a whole lot of love on 1 epoch of cleaned and deduped c2 logs. This model is 100% 'born-local', the result of roughly 27 hours and a little bit of patience on a single RTX 4080 SUPER.

As the hyperparameters and dataset intentionally mirror those used in the original Sorcerer 8x22b tune, this is considered its 'lite' counterpart, aiming to provide the same bespoke conversational experience scaled down to its smaller size and reduced hardware requirements.
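
For illustration only, the rough shape of a single-GPU LoRA run via unsloth and trl. Every value below is a placeholder, since the card does not publish the actual hyperparameters (they mirror the 8x22b tune), and the dataset path is hypothetical:

  # All hyperparameters and paths below are placeholders, not the card's actual settings.
  from datasets import load_dataset
  from transformers import TrainingArguments
  from trl import SFTTrainer
  from unsloth import FastLanguageModel

  model, tokenizer = FastLanguageModel.from_pretrained(
      model_name="mistralai/Mistral-Small-Instruct-2409",
      max_seq_length=8192,          # placeholder context length
      load_in_4bit=True,            # keeps the 22B base within a single consumer GPU
  )
  model = FastLanguageModel.get_peft_model(
      model,
      r=16,                         # placeholder rank
      lora_alpha=16,                # placeholder alpha
      target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
  )

  dataset = load_dataset("json", data_files="c2_logs_cleaned.jsonl", split="train")

  trainer = SFTTrainer(
      model=model,
      tokenizer=tokenizer,
      train_dataset=dataset,
      dataset_text_field="text",    # assumes a pre-templated "text" column
      args=TrainingArguments(num_train_epochs=1, output_dir="sorcerer-22b-lora"),
  )
  trainer.train()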

While all three share the same Mistral-Small-Instruct base, in contrast to its sisters Mistral-Small-NovusKyver and Acolyte-22B, this release did not SLERP the resulting model with the original in a 50/50 ratio post-training. Instead, the alpha was dropped when the LoRA was merged with the full-precision weights in the final step.
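
A rough picture of that final step using the generic peft merge path (the actual tooling may have differed): peft scales the merged delta by lora_alpha / r, so lowering alpha before the merge attenuates the adapter's contribution. The adapter directory and the halved alpha below are hypothetical:

  # Hypothetical paths and alpha; shows lowering lora_alpha before merging.
  import torch
  from peft import PeftConfig, PeftModel
  from transformers import AutoModelForCausalLM

  base = AutoModelForCausalLM.from_pretrained(
      "mistralai/Mistral-Small-Instruct-2409", torch_dtype=torch.bfloat16
  )

  cfg = PeftConfig.from_pretrained("sorcerer-22b-lora")   # hypothetical adapter dir
  cfg.lora_alpha = cfg.lora_alpha / 2                     # hypothetical reduced alpha

  model = PeftModel.from_pretrained(base, "sorcerer-22b-lora", config=cfg)
  model = model.merge_and_unload()                        # W += (alpha / r) * B @ A
  model.save_pretrained("SorcererLM-22B")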

Acknowledgments

  • First and foremost, a huge thank you to my brilliant teammates envoid and rAIfle. Special shout-out to rAIfle for critical last-minute advice that got this one across the finish line
  • Props to unsloth as well for helping make this local tune possible
  • And of course, none of this would matter without users like you. Thank you :)

Safety

...
