
This is mistralai/Mixtral-8x7B-Instruct-v0.1, converted to GGUF and quantized to q8_0. Both the model weights and the embedding/output tensors are q8_0.
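A conversion like this can be reproduced with llama.cpp's own tooling. The sketch below is an assumption about the exact invocation, not a record of the commands actually used; the `--output-tensor-type` and `--token-embedding-type` flags of `llama-quantize` are what keep the embedding/output tensors at q8_0 instead of the default type for those tensors. File names are placeholders.

```shell
# Convert the HF checkpoint to a GGUF file (hypothetical paths).
python llama.cpp/convert_hf_to_gguf.py \
    --outtype f16 \
    --outfile mixtral-8x7b-instruct-f16.gguf \
    Mixtral-8x7B-Instruct-v0.1/

# Quantize everything to q8_0, including embedding and output tensors.
llama.cpp/llama-quantize \
    --output-tensor-type q8_0 \
    --token-embedding-type q8_0 \
    mixtral-8x7b-instruct-f16.gguf \
    mixtral-8x7b-instruct-q8_0.gguf \
    q8_0
```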

The model is split with the llama.cpp `llama-gguf-split` CLI utility into shards no larger than 1 GB, which makes it easier to resume the download if it is interrupted.
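The split described above can be sketched as follows; this is an assumed invocation of `llama-gguf-split` with placeholder file names, not the exact command used. Recent llama.cpp builds can load the first shard directly, so merging is optional.

```shell
# Split into shards of at most 1 GB each; output files are named
# mixtral-8x7b-instruct-q8_0-00001-of-0000N.gguf and so on.
llama.cpp/llama-gguf-split --split --split-max-size 1G \
    mixtral-8x7b-instruct-q8_0.gguf \
    mixtral-8x7b-instruct-q8_0

# To reassemble a single file, point --merge at the first shard.
llama.cpp/llama-gguf-split --merge \
    mixtral-8x7b-instruct-q8_0-00001-of-0000N.gguf \
    mixtral-8x7b-instruct-q8_0-merged.gguf
```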

This is uploaded pretty much just as a personal backup. Mixtral Instruct is one of my favorite models.

All operations were performed with llama.cpp commit 8cd1bcfd3fc9f2b5cbafd7fb7581b3278acec25fz (2024-08-11).

GGUF
- Model size: 46.7B params
- Architecture: llama
- Precision: 8-bit (q8_0)