---
base_model: google/gemma-2-9b-it
datasets:
- princeton-nlp/gemma2-ultrafeedback-armorm
license: mit
tags:
- alignment-handbook
- generated_from_trainer
- mlx
model-index:
- name: princeton-nlp/gemma-2-9b-it-SimPO
results: []
---
# mlx-community/gemma-2-9b-it-SimPO
The model [mlx-community/gemma-2-9b-it-SimPO](https://huggingface.co/mlx-community/gemma-2-9b-it-SimPO) was converted to MLX format from [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) using mlx-lm version **0.18.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub (or load a local copy).
model, tokenizer = load("mlx-community/gemma-2-9b-it-SimPO")

# Generate a completion; verbose=True streams tokens as they are produced.
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
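Since this is an instruction-tuned (chat) model, results are typically better when the prompt is wrapped in the Gemma chat format first. A minimal sketch, assuming the converted tokenizer exposes the standard `apply_chat_template` method from the underlying Hugging Face tokenizer:
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/gemma-2-9b-it-SimPO")

# Format the user message with the tokenizer's chat template so the model
# sees the same turn structure it was trained on.
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```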