Custom GGUF quants of Mistral-Nemo-Instruct-2407, with the output tensors quantized to Q8_0 and the token embeddings kept at F32. 🧠🔥🚀
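
For reference, a quant along these lines can be produced with llama.cpp's `llama-quantize` tool via its `--output-tensor-type` and `--token-embedding-type` overrides. The sketch below is a minimal example, not the exact recipe used for these files: the input/output file names and the Q4_K_M base type are placeholders, and it assumes an F32 GGUF conversion of the model already exists and that `llama-quantize` is on your PATH.

```python
# Minimal sketch, assuming:
#   - llama.cpp's `llama-quantize` binary is on PATH
#   - an F32 GGUF conversion of Mistral-Nemo-Instruct-2407 exists at the input path below
#   - Q4_K_M is just an illustrative base quantization type, not necessarily the one used here
import subprocess

subprocess.run(
    [
        "llama-quantize",
        "--output-tensor-type", "q8_0",   # quantize the output tensor to Q8_0
        "--token-embedding-type", "f32",  # keep the token embeddings at F32
        "Mistral-Nemo-Instruct-2407-F32.gguf",     # hypothetical input GGUF
        "Mistral-Nemo-Instruct-2407-Q4_K_M.gguf",  # hypothetical output GGUF
        "Q4_K_M",                          # example base quantization type
    ],
    check=True,
)
```

The two override flags leave the rest of the tensors at the chosen base type, so the same pattern applies to any base quant you want to build.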