---
license: apache-2.0
---
# Mistral-7B-Instruct-v0.3 quantized to 4bits
- Weight-only quantization to 4 bits via GPTQ
- GPTQ calibration targeted X% accuracy recovery relative to the unquantized model
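To illustrate what weight-only 4-bit quantization stores, here is a minimal sketch of per-group symmetric round-to-nearest int4 quantization in NumPy. Note that GPTQ itself goes further, compensating rounding error column-by-column using second-order (Hessian) information from calibration data; this sketch shows only the 4-bit group format (integer codes plus a per-group scale), not the GPTQ error-correction algorithm, and `group_size=128` is an illustrative choice, not necessarily this model's setting.

```python
import numpy as np

def quantize_4bit(weights, group_size=128):
    # Per-group symmetric round-to-nearest quantization to int4.
    # Each group of `group_size` weights shares one fp scale; the
    # codes fit the signed 4-bit range [-8, 7].
    w = weights.reshape(-1, group_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale, shape):
    # Reconstruct approximate fp weights from codes and scales.
    return (q.astype(np.float32) * scale).reshape(shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 256)).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s, w.shape)
max_err = float(np.abs(w - w_hat).max())
```

Each weight is stored as a 4-bit code, so the weight memory footprint is roughly a quarter of fp16, plus a small overhead for the per-group scales.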
# Open LLM Leaderboard evaluation scores
| | Mistral-7B-Instruct-v0.3 | Mistral-7B-Instruct-v0.3-GPTQ-4bit (this model) |
| :------------------: | :----------------------: | :------------------------------------------------: |
| arc-c, 25-shot | 63.48 | 63.40 |
| mmlu, 5-shot | 61.13 | 60.89 |
| hellaswag, 10-shot | 84.49 | ? |
| winogrande, 5-shot | 79.16 | 79.08 |
| gsm8k, 5-shot | 43.37 | 45.41 |
| truthfulqa, 0-shot | 59.65 | 57.48 |
| Average accuracy | 65.21 | x |
| Recovery | 100% | x |
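The recovery row can be computed per task as `100 * quantized_score / baseline_score`. A minimal sketch using the scores from the table above (hellaswag is omitted because its quantized score is not yet reported):

```python
# Per-task accuracy recovery of the GPTQ 4-bit model relative to the
# unquantized baseline, using the scores reported in the table above.
baseline = {"arc-c": 63.48, "mmlu": 61.13, "winogrande": 79.16,
            "gsm8k": 43.37, "truthfulqa": 59.65}
quantized = {"arc-c": 63.40, "mmlu": 60.89, "winogrande": 79.08,
             "gsm8k": 45.41, "truthfulqa": 57.48}

# recovery > 100 means the quantized model scored higher on that task
recovery = {task: 100.0 * quantized[task] / baseline[task]
            for task in baseline}
```

Interestingly, gsm8k recovery exceeds 100% here (45.41 vs 43.37), which does happen with quantized models on individual benchmarks; the average across all tasks is the more meaningful summary.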