---
language:
- en
license: apache-2.0
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- mlx
datasets:
- teknium/OpenHermes-2.5
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: OpenHermes-2-Mistral-7B
results: []
---
# cogbuji/OpenHermes-2.5-Mistral-7B-mlx-4bit
This model was converted to MLX format from [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and quantized to 4 bits.
Refer to the [original model card](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) for more details on the model.
It was converted and quantized with mlx **0.7.0** and mlx_lm **0.3.0** and should be used with those versions; later releases of these libraries may drop support for this model.
## Use with mlx
```bash
pip install mlx-lm
```
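Because the conversion targets mlx **0.7.0** and mlx_lm **0.3.0** (see above), you may want to pin those versions when installing (an illustrative command; these are the package names as published on PyPI):
```bash
pip install mlx==0.7.0 mlx-lm==0.3.0
```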
```python
from mlx_lm import load, generate

# Download the quantized weights and tokenizer from the Hub
model, tokenizer = load("cogbuji/OpenHermes-2.5-Mistral-7B-mlx-4bit")
# Generate a completion for a simple prompt
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
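Since OpenHermes-2.5 was fine-tuned on ChatML-style prompts (see the `chatml` tag above), you will generally get better results by formatting the prompt with the tokenizer's chat template. A minimal sketch, assuming the bundled tokenizer ships a ChatML chat template and exposes `apply_chat_template`:
```python
from mlx_lm import load, generate

model, tokenizer = load("cogbuji/OpenHermes-2.5-Mistral-7B-mlx-4bit")

# Build a ChatML-formatted prompt from a list of chat messages
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "hello"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```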