# mlx-community/deepseek-vl2-tiny-bf16

This model was converted to MLX format from prince-canuma/deepseek-vl2-tiny using mlx-vlm version 0.1.5. Refer to the original model card for more details on the model.
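For reference, a conversion of this kind is typically produced with mlx-vlm's convert entry point. The command below is only a sketch: the flag names (`--hf-path`, `--mlx-path`, `--dtype`) follow mlx-vlm's CLI conventions, so check `python -m mlx_vlm.convert --help` for the exact options in your installed version.

```bash
# Sketch: convert the original checkpoint to bf16 MLX weights.
# Flag names are assumptions based on mlx-vlm's CLI conventions.
python -m mlx_vlm.convert \
  --hf-path prince-canuma/deepseek-vl2-tiny \
  --mlx-path deepseek-vl2-tiny-bf16 \
  --dtype bfloat16
```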

## Use with mlx

```bash
pip install -U mlx-vlm
python -m mlx_vlm.generate --model mlx-community/deepseek-vl2-tiny-bf16 --max-tokens 100 --temp 0.0
```
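Because this is a vision-language model, you will usually want to supply an image and a prompt explicitly. A sketch, assuming mlx-vlm's `--prompt` and `--image` CLI flags (the image path is a placeholder, not part of the original card):

```bash
# Describe a local image; substitute your own file for the placeholder path.
python -m mlx_vlm.generate \
  --model mlx-community/deepseek-vl2-tiny-bf16 \
  --max-tokens 100 \
  --temp 0.0 \
  --prompt "Describe this image." \
  --image path/to/image.jpg
```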
Model size: 3.37B params · Tensor type: BF16 (Safetensors)
