This is the 7B Qwen2-VL vision-language model, exported to ONNX via https://github.com/pdufour/llm-export.

See also https://huggingface.co/pdufour/Qwen2-VL-2B-Instruct-ONNX-Q4-F16 for a 2B model that is compatible with onnxruntime-webgpu.


Base model: Qwen/Qwen2-VL-7B (this repository contains a quantized ONNX export of that model).
