# mlx-community/pixtral-12b-8bit
This model was converted to MLX format from [mistral-community/pixtral-12b](https://huggingface.co/mistral-community/pixtral-12b) using mlx-vlm version 0.0.15.
Refer to the original model card for more details on the model.
## Use with mlx
```bash
# Install (or upgrade) the MLX vision-language tooling
pip install -U mlx-vlm

# Run generation with the converted 8-bit model from the command line
python -m mlx_vlm.generate --model mlx-community/pixtral-12b-8bit --max-tokens 100 --temp 0.0
```
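For scripted use, mlx-vlm also exposes a Python API. The sketch below is a minimal, illustrative example: it assumes the `load` and `generate` entry points exported by `mlx_vlm` and that `generate` accepts `prompt`, `image`, `max_tokens`, and `temp` keyword arguments; argument names and order have shifted between mlx-vlm releases, so verify against your installed version. The image path is a placeholder.

```python
# Minimal sketch of the mlx-vlm Python API (an assumption, not the
# model card's own example). Keyword names (`prompt`, `image`, `temp`)
# may differ between mlx-vlm releases.
from mlx_vlm import load, generate

# Download (if needed) and load the 8-bit quantized model and its processor.
model, processor = load("mlx-community/pixtral-12b-8bit")

# "image.jpg" is a placeholder path to a local image file.
output = generate(
    model,
    processor,
    prompt="Describe this image.",
    image="image.jpg",
    max_tokens=100,
    temp=0.0,
)
print(output)
```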