Use Qwen2VLImageProcessor for image_processor_type

#2

The issue that this PR fixes is similar to what was discussed here: https://github.com/huggingface/transformers/issues/36193
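For context, the image_processor_type field in preprocessor_config.json is what AutoImageProcessor uses to decide which processor class to instantiate, so a pinned class name that the installed transformers/vLLM stack doesn't expect can break loading. A minimal sketch of the loading path, with an illustrative model id standing in for this repo:

```python
from transformers import AutoImageProcessor

# Illustrative checkpoint id, standing in for the repo this PR targets.
model_id = "Qwen/Qwen2-VL-7B-Instruct"

# AutoImageProcessor reads image_processor_type from preprocessor_config.json
# and instantiates that class; this PR points the field at
# Qwen2VLImageProcessor so the stored name matches an available class.
processor = AutoImageProcessor.from_pretrained(model_id)
print(type(processor).__name__)
```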

NM Testing org

Thanks for the PR! I didn't know about this issue, but it makes sense given the current focus on fast image processors. Would it be simpler to just remove image_processor_type entirely and rely on HF to set the latest default if it changes again? I can apply the right change to our other models.
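A rough sketch of what removing the field would look like, assuming a local checkout of the model repo (the path below is illustrative):

```python
import json
from pathlib import Path

# Illustrative local checkout of the model repo.
cfg_path = Path("qwen2-vl-checkpoint/preprocessor_config.json")
cfg = json.loads(cfg_path.read_text())

# Drop the pinned class so transformers falls back to whatever default
# image processor it currently ships for this architecture.
cfg.pop("image_processor_type", None)
cfg_path.write_text(json.dumps(cfg, indent=2) + "\n")
```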

Thanks Michael for the quick reply!

I like your idea. I just tested removing image_processor_type and ran the model via vLLM and it worked.
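A minimal sketch of that kind of smoke test, assuming a local copy of the checkpoint with the field removed (the path and prompt are illustrative):

```python
from vllm import LLM, SamplingParams

# Illustrative local path to a copy of the checkpoint with
# image_processor_type removed from preprocessor_config.json.
llm = LLM(model="./qwen2-vl-checkpoint")

# A text-only prompt is enough to confirm the processor loads and the
# model serves; a full multimodal request would also pass in images.
outputs = llm.generate(["Say hello."], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```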

Would you like me to make the change in this PR?

mgoin changed pull request status to closed
