
# Sloppier-Wingman-Alternative-8x7B-hf

*Sloppier-Nasty-Wingman*

An alternative to rAIfle/Sloppy-Wingman-8x7B-hf; the second part of the merge differs slightly from that model's. I personally still prefer ChatML on this one, but Alpaca and/or Mistral formats ought to work as well; a typical ChatML layout is sketched below.
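For reference, the standard ChatML turn structure looks like this (the bracketed placeholders are filled in by your frontend):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```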

```yaml
models:
  - model: mistralai/Mixtral-8x7B-v0.1+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
    parameters:
      weight: 0.33
  - model: mistralai/Mixtral-8x7B-v0.1+wandb/Mixtral-8x7b-Remixtral
    parameters:
      weight: 0.33
merge_method: task_arithmetic
base_model: mistralai/Mixtral-8x7B-v0.1
dtype: float16
```

and

```yaml
models:
  - model: mistralai/Mixtral-8x7B-Instruct-v0.1+/ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b
    parameters:
      weight: 0.85
  - model: mistralai/Mixtral-8x7B-Instruct-v0.1+SeanWu25/Mixtral_8x7b_Medicine
    parameters:
      weight: 0.33
  - model: notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES
    parameters:
      weight: 0.25
merge_method: task_arithmetic
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
dtype: float16
```
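Both component configs use mergekit's `task_arithmetic` method: each fine-tune contributes a weighted "task vector" (its delta from the base model), and the deltas are summed back onto the base. The `model+adapter` entries (e.g. `mistralai/Mixtral-8x7B-v0.1+wandb/Mixtral-8x7b-Remixtral`) are mergekit's syntax for a base model with a LoRA applied on top. A minimal conceptual sketch of the arithmetic on plain state dicts, not mergekit's actual implementation:

```python
import torch

def task_arithmetic(base, finetunes, weights):
    """merged = base + sum_i w_i * (finetune_i - base), applied per tensor."""
    merged = {}
    for name, base_w in base.items():
        delta = sum(w * (ft[name] - base_w) for ft, w in zip(finetunes, weights))
        merged[name] = base_w + delta
    return merged

# Toy example with a single two-element "layer":
base = {"w": torch.tensor([1.0, 1.0])}
ft_a = {"w": torch.tensor([2.0, 1.0])}
ft_b = {"w": torch.tensor([1.0, 3.0])}
print(task_arithmetic(base, [ft_a, ft_b], [0.33, 0.33]))
# -> {'w': tensor([1.3300, 1.6600])}
```

To actually run a config, `mergekit-yaml config.yml ./output-dir` is the usual entry point.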

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
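SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line, which tends to preserve weight magnitudes better than plain averaging. A rough per-tensor sketch of the idea (mergekit's actual implementation differs in details such as normalization and fallbacks):

```python
import torch

def slerp(t, v0, v1, eps=1e-8):
    # Cosine of the angle between the two weight tensors, treated as flat vectors.
    dot = torch.sum(v0 * v1) / (v0.norm() * v1.norm() + eps)
    theta = torch.arccos(dot.clamp(-1.0, 1.0))
    if theta.abs() < eps:  # near-parallel tensors: plain lerp is fine
        return (1 - t) * v0 + t * v1
    # Spherical interpolation coefficients.
    s0 = torch.sin((1 - t) * theta) / torch.sin(theta)
    s1 = torch.sin(t * theta) / torch.sin(theta)
    return s0 * v0 + s1 * v1
```

With `t: 0.66` as in the config below, the result sits roughly two-thirds of the way from ./01-pal-base toward ./02.5-pal-instruct.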

### Models Merged

The following models were included in the merge:

* ./02.5-pal-instruct
* ./01-pal-base

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ./01-pal-base
  - model: ./02.5-pal-instruct
merge_method: slerp
base_model: ./01-pal-base
parameters:
  t:
    - value: 0.66
dtype: float16
```
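
The merged model loads like any other Mixtral checkpoint. A minimal sketch with transformers (`device_map="auto"` assumes accelerate is installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rAIfle/Sloppier-Wingman-Alternative-8x7B-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge dtype
    device_map="auto",          # requires accelerate
)
```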