https://huggingface.co/nvidia/Llama-3_1-Nemotron-51B-Instruct

#306
by Pomni - opened

i've seen a benchmark of this in the lm studio server, and apparently it's comparable to a 70b model. i would like to try it out (i've started using the downstairs living room pc, which has WAY better specs and an AVX2 cpu, unlike my main AVX-only pc).
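(as an aside, a minimal sketch for checking whether a cpu has AVX2, assuming the third-party py-cpuinfo package — my illustration, not something from the thread:)

```python
# pip install py-cpuinfo
import cpuinfo

# py-cpuinfo exposes the cpu's feature flags as lowercase strings
flags = cpuinfo.get_cpu_info().get("flags", [])
print("AVX: ", "avx" in flags)
print("AVX2:", "avx2" in flags)
```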

well, let's see if it is supported by llama.cpp. i am a bit skeptical...
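(one lightweight way to check before pulling ~100 GB of weights: fetch just the model's config.json and look at the architectures field, since that is what llama.cpp's converter dispatches on. a sketch, assuming the huggingface_hub package:)

```python
import json
from huggingface_hub import hf_hub_download

# download only config.json (a few KB), not the model weights
path = hf_hub_download(
    repo_id="nvidia/Llama-3_1-Nemotron-51B-Instruct",
    filename="config.json",
)

with open(path) as f:
    config = json.load(f)

# the converter only handles architectures it has a model class registered for
print(config["architectures"])
```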

yeah, unfortunately:

ERROR:hf-to-gguf:Model DeciLMForCausalLM is not supported
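(for reference, that message comes from llama.cpp's convert_hf_to_gguf.py. a rough sketch of the kind of invocation that fails — the paths here are hypothetical:)

```python
import subprocess

# convert_hf_to_gguf.py ships in the llama.cpp repo; the model directory
# is a local snapshot of the HF repo (hypothetical path)
subprocess.run(
    [
        "python", "convert_hf_to_gguf.py",
        "/models/Llama-3_1-Nemotron-51B-Instruct",
        "--outfile", "nemotron-51b.gguf",
    ],
    check=True,
)
# at the time of this thread the converter had no handler for the
# DeciLMForCausalLM architecture (the NAS-derived DeciLM layout with
# non-uniform per-layer attention), so it bailed out with the error above
```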

mradermacher changed discussion status to closed
