Quantization help?

#4
by Daemontatox - opened

Can't seem to quantize a finetuned Gemma 2 2B model.

Error: Error converting to fp16:

```
INFO:hf-to-gguf:Loading model: minigemmy
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:gguf: loading model weight map from 'pytorch_model.bin.index.json'
INFO:hf-to-gguf:gguf: loading model part 'pytorch_model-00001-of-00002.bin'
INFO:hf-to-gguf:token_embd.weight, torch.float16 --> F16, shape = {2304, 256000}
INFO:hf-to-gguf:blk.0.attn_q.weight, torch.float16 --> F16, shape = {2304, 2048}
INFO:hf-to-gguf:blk.0.attn_k.weight, torch.float16 --> F16, shape = {2304, 1024}
INFO:hf-to-gguf:blk.0.attn_v.weight, torch.float16 --> F16, shape = {2304, 1024}
INFO:hf-to-gguf:blk.0.attn_output.weight, torch.float16 --> F16, shape = {2048, 2304}
INFO:hf-to-gguf:blk.0.ffn_gate.weight, torch.float16 --> F16, shape = {2304, 9216}
INFO:hf-to-gguf:blk.0.ffn_up.weight, torch.float16 --> F16, shape = {2304, 9216}
INFO:hf-to-gguf:blk.0.ffn_down.weight, torch.float16 --> F16, shape = {9216, 2304}
INFO:hf-to-gguf:blk.0.attn_norm.weight, torch.float16 --> F32, shape = {2304}
INFO:hf-to-gguf:blk.0.post_attention_norm.weight, torch.float16 --> F32, shape = {2304}
INFO:hf-to-gguf:blk.0.ffn_norm.weight, torch.float16 --> F32, shape = {2304}
INFO:hf-to-gguf:blk.0.post_ffw_norm.weight, torch.float16 --> F32, shape = {2304}
[... identical per-tensor lines for blk.1 through blk.25 elided; 'pytorch_model-00002-of-00002.bin' is loaded partway through ...]
INFO:hf-to-gguf:output_norm.weight, torch.float16 --> F32, shape = {2304}
INFO:hf-to-gguf:Set meta model
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:Set model tokenizer
Traceback (most recent call last):
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 4067, in <module>
    main()
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 4061, in main
    model_instance.write()
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 391, in write
    self.prepare_metadata(vocab_only=False)
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 384, in prepare_metadata
    self.set_vocab()
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 2672, in set_vocab
    self._set_vocab_sentencepiece()
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 692, in _set_vocab_sentencepiece
    tokens, scores, toktypes = self._create_vocab_sentencepiece()
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 709, in _create_vocab_sentencepiece
    raise FileNotFoundError(f"File not found: {tokenizer_path}")
FileNotFoundError: File not found: minigemmy/tokenizer.model
```

I think it just means the author needs to upload the `tokenizer.model` file; it has to be present for Gemma models, since the converter builds the vocab from that SentencePiece file.
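
If it helps, here's a minimal sketch of one way to patch the repo: pull `tokenizer.model` from the base checkpoint and upload it alongside the finetuned weights. The repo ids below are assumptions — I'm guessing the finetune started from `google/gemma-2-2b` (a gated repo, so your token must have accepted the license), and `your-username/minigemmy` is a hypothetical placeholder for your finetuned repo.

```python
# Sketch: copy tokenizer.model from the base Gemma 2 repo into the finetuned repo
# so convert_hf_to_gguf.py can find the SentencePiece vocab it expects.
# Requires `pip install huggingface_hub` and a token with write access to TARGET_REPO.
from huggingface_hub import HfApi, hf_hub_download

BASE_REPO = "google/gemma-2-2b"          # assumed base checkpoint (gated repo)
TARGET_REPO = "your-username/minigemmy"  # hypothetical: your finetuned repo

# Download the SentencePiece vocab file the converter is looking for.
local_path = hf_hub_download(repo_id=BASE_REPO, filename="tokenizer.model")

# Upload it next to the finetuned weights.
HfApi().upload_file(
    path_or_fileobj=local_path,
    path_in_repo="tokenizer.model",
    repo_id=TARGET_REPO,
)
```

If you're converting a local folder instead, simply copying `tokenizer.model` from the base checkpoint into that folder before rerunning the converter should work the same way.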

Alright, thanks.

Daemontatox changed discussion status to closed
