Llama-3.2-11B-Vision-Instruct-GGUF

Sourced from Ollama.

The Llama 3.2-Vision collection of multimodal large language models (LLMs) comprises pretrained and instruction-tuned image-reasoning generative models in 11B and 90B sizes (text + images in / text out). The instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. They outperform many of the available open-source and closed multimodal models on common industry benchmarks.
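
Since these weights are sourced from Ollama, the quickest way to try them is through the Ollama runtime. Below is a minimal sketch using the official `ollama` Python client, assuming a running Ollama server and that the model has been pulled under Ollama's `llama3.2-vision` tag (the tag name is Ollama's, not part of this repository):

```python
# Minimal image-Q&A sketch using the ollama Python client.
# Assumes `pip install ollama`, a running Ollama server, and that the
# model has been pulled, e.g. `ollama pull llama3.2-vision`.
import ollama

response = ollama.chat(
    model="llama3.2-vision",  # Ollama tag for Llama 3.2 11B Vision Instruct
    messages=[
        {
            "role": "user",
            "content": "Describe what is happening in this image.",
            "images": ["example.jpg"],  # local path to any image file
        }
    ],
)

print(response["message"]["content"])
```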

Format: GGUF
Model size: 895M params
Architecture: mllama
Available quantizations: 4-bit, 8-bit, 16-bit
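
To fetch one of the quantized files directly from this repository, `huggingface_hub` can download it by name. The filename in the sketch below is hypothetical, since the exact file names vary by quantization; list the repository contents first and substitute a real one:

```python
# Sketch: list the repo's GGUF files and download one with huggingface_hub.
# `pip install huggingface_hub`; the filename passed to hf_hub_download is
# hypothetical -- replace it with one printed by the listing below.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "leafspark/Llama-3.2-11B-Vision-Instruct-GGUF"

# Show every GGUF file so you can pick a quantization (4-, 8-, or 16-bit).
for name in list_repo_files(repo_id):
    if name.endswith(".gguf"):
        print(name)

# Hypothetical filename -- substitute one from the listing above.
path = hf_hub_download(
    repo_id=repo_id,
    filename="Llama-3.2-11B-Vision-Instruct.Q4_K_M.gguf",
)
print(f"Downloaded to {path}")
```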

