Model weights size clarification?
Hi,
Thank you for your models! I tried to get this one running on a single 24 GB GPU using vLLM and got a CUDA OOM error. From my understanding I should still have plenty of free space, given that we are working in 4-bit.
Upon reviewing your weights, it appears the model weighs twice the expected size (roughly 26B parameters at 4 bits each, i.e. 26/2 ≈ 13 GB). Were the weights saved differently somehow (in float32 perhaps)? Is there a way to upload .safetensors that correspond to the expected 4-bit weights, or the .bin weights in float16? Thanks
Hi,
Thank you for your question! The observed size is expected: the LLM component has been quantized to 4-bit precision, but the Vision Transformer (ViT) part of the model remains at 16-bit precision. This results in the following size breakdown:
- LLM (20B): ~10 GB (quantized to 4-bit)
- ViT (6B): ~12 GB (at 16-bit precision)
In total, this adds up to approximately 22 GB, which aligns with the size you're seeing. Unfortunately, the ViT component is not currently quantized, so the total reflects this mixed-precision setup.
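For reference, here is a quick back-of-the-envelope check of that breakdown (a minimal sketch using the approximate 20B/6B parameter counts above; it ignores embedding tables and quantization metadata, so the real files are a bit larger):

```python
# Approximate on-disk size of the mixed-precision checkpoint:
# ~20B LLM parameters at 4-bit + ~6B ViT parameters at 16-bit.

def weight_size_gb(num_params: float, bits_per_param: int) -> float:
    """Approximate weight size in GB (1 GB = 1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9

llm_gb = weight_size_gb(20e9, 4)    # ~10 GB
vit_gb = weight_size_gb(6e9, 16)    # ~12 GB
print(f"LLM: {llm_gb:.0f} GB, ViT: {vit_gb:.0f} GB, total: {llm_gb + vit_gb:.0f} GB")
```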
Thank you for your explanation!