Split
Can the huge 21GB file be split into shards of less than 10GB each? Around 5GB would be best.
So that it doesn't give the following error:
```
inference api error: The model ModelCloud/Qwen2.5-Coder-32B-Instruct-gptqmodel-4bit-vortex-v1 is too large to be loaded automatically (21GB > 10GB).
```
Thanks in advance.
Yes, agreed. Please do that so I can put up a Space like this one: https://huggingface.co/spaces/llamameta/Qwen2.5-Coder-32B-Instruct-Chat-Assistant
We will shard the files into 4GB chunks asap.
4GB sharded tensors updated. You guys can redownload the model.
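For anyone who wants to reshard a checkpoint themselves, it is essentially a re-save with transformers. A rough sketch (the output directory and `device_map` are assumptions, not something from this repo):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelCloud/Qwen2.5-Coder-32B-Instruct-gptqmodel-4bit-vortex-v1"

# Load the checkpoint, then write it back out in smaller pieces.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# max_shard_size controls the size of each safetensors file written to disk.
model.save_pretrained("resharded", max_shard_size="4GB")
tokenizer.save_pretrained("resharded")
```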
Thank you, but the Inference API still can't be used. Error:
The model ModelCloud/Qwen2.5-Coder-32B-Instruct-gptqmodel-4bit-vortex-v1 is too large to be loaded automatically (21GB > 10GB).
@llamameta Sharding does not reduce GPU VRAM usage. Free Hugging Face Spaces only allow models up to 10GB in total size, and a 32B model, even in 4-bit form, is still huge. You can, however, run the 32B model locally on your 24GB 4090 without issue.
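If it helps, here is a minimal local-inference sketch with transformers, assuming a GPTQ backend such as gptqmodel is installed; the prompt and generation settings are just placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelCloud/Qwen2.5-Coder-32B-Instruct-gptqmodel-4bit-vortex-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the quantized weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Placeholder prompt for a quick smoke test.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```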