Split

#2
by Cyperior - opened

Can the huge 21GB file be split into chunks of less than 10GB? Around 5GB each would be best.

So that it doesn't give the following error:
```
Inference API error: The model ModelCloud/Qwen2.5-Coder-32B-Instruct-gptqmodel-4bit-vortex-v1 is too large to be loaded automatically (21GB > 10GB).
```

Thanks in advance.

ModelCloud.AI org

We will shard the files into 4GB chunks asap.
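
For reference, a minimal sketch of how the re-sharding is typically done, assuming the quantized checkpoint can be loaded and re-saved with transformers (a GPU plus a GPTQ backend such as gptqmodel or auto-gptq is assumed; the output directory name is illustrative):

```python
# Minimal sketch: re-save the checkpoint as 4GB safetensors shards.
# Assumes a GPU and a GPTQ backend (gptqmodel / auto-gptq) are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelCloud/Qwen2.5-Coder-32B-Instruct-gptqmodel-4bit-vortex-v1"

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Write the weights as multiple safetensors shards of at most 4GB each;
# "qwen2.5-coder-32b-gptq-4bit-sharded" is just an illustrative directory name.
out_dir = "qwen2.5-coder-32b-gptq-4bit-sharded"
model.save_pretrained(out_dir, max_shard_size="4GB")
tokenizer.save_pretrained(out_dir)
```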

ModelCloud.AI org

4GB sharded tensors updated. You guys can redownload the model.

> 4GB sharded tensors updated. You guys can redownload the model.

Thank you. But the Inference API still fails with the error:

```
The model ModelCloud/Qwen2.5-Coder-32B-Instruct-gptqmodel-4bit-vortex-v1 is too large to be loaded automatically (21GB > 10GB).
```

ModelCloud.AI org

@llamameta Sharding does not reduce GPU VRAM usage. The free Hugging Face Inference API only auto-loads models up to 10GB in total size, and a 32B model, even in 4-bit form, is a huge model. You can, however, run the 32B model locally on your 24GB 4090 without issue.
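
If you want to try it locally, here is a minimal sketch using transformers (a GPTQ backend such as gptqmodel or auto-gptq must be installed; the prompt and generation settings are illustrative):

```python
# Minimal local-inference sketch; assumes a 24GB GPU and a GPTQ backend
# (gptqmodel or auto-gptq) installed alongside transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelCloud/Qwen2.5-Coder-32B-Instruct-gptqmodel-4bit-vortex-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; adjust max_new_tokens and sampling as needed.
prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```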
