deepseek-coder-33b-instruct-hf-4bit-mlx

This model was converted to MLX format from deepseek-ai/deepseek-coder-33b-instruct. Refer to the original model card for more details.

Use with mlx

```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model mlx-community/deepseek-coder-33b-instruct-hf-4bit-mlx --prompt "### Instruction: \nwrite a quick sort algorithm in python.\n### Response: \n"
```
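The `--prompt` string above follows DeepSeek Coder's instruction/response template. If you generate prompts programmatically, a minimal sketch of building that template in Python (the helper name `build_prompt` is hypothetical, not part of mlx-examples):

```python
def build_prompt(instruction: str) -> str:
    # Wrap a user instruction in the "### Instruction / ### Response"
    # template shown in the generate.py command above. Note the spacing
    # and newlines match the example prompt exactly.
    return f"### Instruction: \n{instruction}\n### Response: \n"

# Example: reproduce the prompt used in the command above.
prompt = build_prompt("write a quick sort algorithm in python.")
```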