Wrong Model Card and OSError

#5
by guptavarun - opened

Thanks for this work!

  1. Updating Model Name in the Model Card

Currently, the instruction provided for the 72B-Qwen2-OV model is

```python
pretrained = "lmms-lab/llava-onevision-qwen2-0.5b-si"
model_name = "llava_qwen"
device = "cuda"
device_map = "auto"
tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, device_map=device_map)  # Add any other thing you want to pass in llava_model_args
```

Isn't the `pretrained` string wrongly set to the `0.5b-si` model?

  2. OSError

Even if I change the string to

```python
pretrained = "lmms-lab/llava-onevision-qwen2-72b-ov"
```

I run into an OSError

```
OSError: Consistency check failed: file should be of size 4781670360 but has size 4559629983 (model-00012-of-00031.safetensors).
We are sorry for the inconvenience. Please retry with `force_download=True`.
If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub.
```

I tried downloading this model twice, the second time with the `force_download=True` flag, but still got the same error.

Is the model problematic, or is it my method?
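For context, the consistency check in the error above is simply comparing the expected byte count of the shard against what actually landed on disk; a smaller on-disk size usually means a truncated (partial) download rather than a broken model. A minimal local sketch of that kind of check (the file name, sizes, and helper are made up for illustration, not `huggingface_hub` internals):

```python
import os
import tempfile

def check_size(path: str, expected: int) -> bool:
    """Return True if the file on disk matches the expected byte count."""
    actual = os.path.getsize(path)
    if actual != expected:
        print(f"Consistency check failed: file should be of size {expected} "
              f"but has size {actual} ({os.path.basename(path)})")
        return False
    return True

# Simulate a truncated shard: write fewer bytes than the expected size.
with tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False) as f:
    f.write(b"\x00" * 100)  # only 100 bytes actually written
    path = f.name

print(check_size(path, expected=150))  # truncated file -> False
os.remove(path)
```

A retry with `force_download=True` re-fetches the whole shard from scratch, which is why a flaky connection can fail the same way twice.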

Thanks.

Update
The OSError in my case seems to have been caused by a flaky network. I resumed the download with the `resume_download=True` flag, and the safetensors shard above (12/31) downloaded fine.

The model card, though, still seems to be wrong.
