---
base_model:
- HuggingFaceTB/SmolVLM-Instruct
datasets:
- HuggingFaceM4/the_cauldron
- HuggingFaceM4/Docmatix
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- mlx
---
# NexaAI/SmolVLM-Instruct-8bit-MLX
## Quickstart

Run this model directly with [nexa-sdk](https://github.com/NexaAI/nexa-sdk) installed. In the nexa-sdk CLI:

```bash
NexaAI/SmolVLM-Instruct-8bit-MLX
```
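Outside of nexa-sdk, the MLX weights can also be driven from Python with the `mlx-vlm` package. A minimal sketch, assuming `pip install mlx-vlm`, a sample image URL, and the load/generate API from the mlx-vlm README (exact signatures may vary across mlx-vlm versions):

```python
# Sketch: running this MLX quant with mlx-vlm (API per the mlx-vlm README;
# may differ between versions).
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "NexaAI/SmolVLM-Instruct-8bit-MLX"
model, processor = load(model_path)   # downloads and loads the MLX weights
config = load_config(model_path)

# Any local path or URL works here; this COCO image is just an example.
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Wrap the raw prompt in the model's chat template, declaring one image slot.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(image))

output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)
```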
## Overview
SmolVLM is a compact, open multimodal model that accepts arbitrary sequences of image and text inputs and produces text outputs. Designed for efficiency, it can answer questions about images, describe visual content, tell stories grounded in multiple images, or operate as a pure language model when given no visual input. Its lightweight architecture makes it suitable for on-device applications while maintaining strong performance on multimodal tasks.
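The "arbitrary sequences of image and text inputs" are expressed as chat messages whose content interleaves image slots with text. A short sketch of that message structure, following the base model's card (note the processor is loaded from the upstream PyTorch checkpoint, not from this MLX quant):

```python
# Sketch of SmolVLM's interleaved image+text chat format, adapted from the
# HuggingFaceTB/SmolVLM-Instruct card; requires the `transformers` package.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("HuggingFaceTB/SmolVLM-Instruct")

# Each user turn may interleave any number of image slots with text.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Can you describe this image?"},
        ],
    },
]

# Renders the image placeholder tokens plus the assistant prompt prefix.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
print(prompt)
```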
## Model Summary
- Developed by: Hugging Face 🤗
- Model type: Multi-modal model (image+text)
- Language(s) (NLP): English
- License: Apache 2.0
- Architecture: Based on Idefics3 (see technical summary)
## Benchmark Results

| Model | MMMU (val) | MathVista (testmini) | MMStar (val) | DocVQA (test) | TextVQA (val) | Min GPU RAM required (GB) |
|---|---|---|---|---|---|---|
| SmolVLM | 38.8 | 44.6 | 42.1 | 81.6 | 72.7 | 5.02 |
| Qwen-VL 2B | 41.1 | 47.8 | 47.5 | 90.1 | 79.7 | 13.70 |
| InternVL2 2B | 34.3 | 46.3 | 49.8 | 86.9 | 73.4 | 10.52 |
| PaliGemma 3B 448px | 34.9 | 28.7 | 48.3 | 32.2 | 56.0 | 6.72 |
| moondream2 | 32.4 | 24.3 | 40.3 | 70.5 | 65.2 | 3.87 |
| MiniCPM-V-2 | 38.2 | 39.8 | 39.1 | 71.9 | 74.1 | 7.88 |
| MM1.5 1B | 35.8 | 37.2 | 0.0 | 81.0 | 72.5 | NaN |
## Reference

Original model card: [HuggingFaceTB/SmolVLM-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct)