How much memory does this model require?

#1
by wangdafa - opened

I used two 4090s to run this model and still got OOM.

python -m fastchat.serve.vllm_worker --model-path TheBloke/Nous-Capybara-34B-AWQ --trust-remote-code --tensor-parallel-size 2 --quantization awq --dtype auto

You should limit the context length to 8192; otherwise it will try to allocate the full 200K context length, which needs roughly 40 GB of extra VRAM.
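For reference, a rough estimate (assuming this model inherits Yi-34B's config: 60 layers, 8 KV heads, head dim 128, fp16 KV cache): 2 × 60 × 8 × 128 × 2 bytes ≈ 0.23 MB per token, so a 200K context reserves roughly 49 GB for the KV cache alone. FastChat's vllm_worker forwards vLLM engine flags, so capping the length should look something like this (a sketch; check that your vLLM version supports --max-model-len):

python -m fastchat.serve.vllm_worker --model-path TheBloke/Nous-Capybara-34B-AWQ --trust-remote-code --tensor-parallel-size 2 --quantization awq --dtype auto --max-model-len 8192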

Thank you for your answer. The problem has been resolved.

But I've hit a new problem: vLLM doesn't seem to support the conversation template of Nous-Capybara-34B, even when I specify --conv-template manticore. Is this a bug in vLLM?

@wangdafa

https://github.com/lm-sys/FastChat/blob/e53c73f22efa9a37bf76af8783c96049276a2e98/fastchat/conversation.py#L789

I don't think manticore has a system prompt; maybe that's the cause. Try vicuna or airoboros?
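For example (a sketch; the exact registered template names vary by FastChat version, see fastchat/conversation.py for what's available):

python -m fastchat.serve.vllm_worker --model-path TheBloke/Nous-Capybara-34B-AWQ --trust-remote-code --tensor-parallel-size 2 --quantization awq --dtype auto --max-model-len 8192 --conv-template vicuna_v1.1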

wangdafa changed discussion status to closed
