---
language:
- code
tags:
- generated_from_trainer
- code
- coding
- gemma
- mlx
datasets:
- HuggingFaceH4/CodeAlpaca_20K
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
thumbnail: https://huggingface.co/mrm8488/gemma-2b-coder/resolve/main/logo.png
pipeline_tag: text-generation
model-index:
- name: gemma-2b-coder
  results: []
---
# mlx-community/gemma-2b-coder

This model was converted to MLX format from [`MAISAAI/gemma-2b-coder`](https://huggingface.co/MAISAAI/gemma-2b-coder).
Refer to the [original model card](https://huggingface.co/MAISAAI/gemma-2b-coder) for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/gemma-2b-coder")

# Generate a completion for the prompt; verbose=True streams tokens as they are produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```