---
base_model: nvidia/Llama-3.1-Minitron-4B-Width-Base
datasets:
- teknium/OpenHermes-2.5
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
pipeline_tag: text-generation
tags:
- mlx
---
# psx7/llama4B
The model [psx7/llama4B](https://huggingface.co/psx7/llama4B) was converted to MLX format from [rasyosef/Llama-3.1-Minitron-4B-Chat](https://huggingface.co/rasyosef/Llama-3.1-Minitron-4B-Chat) using mlx-lm version **0.18.1**.
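A conversion like this can be reproduced with mlx-lm's `convert` API. The sketch below assumes default conversion options and an illustrative output path; it is not the exact command used for this checkpoint.

```python
# Sketch: reproduce the HF -> MLX conversion with mlx-lm.
# Options (quantization, dtype, output path) are assumed defaults, not the
# settings actually used to produce psx7/llama4B.
from mlx_lm import convert

convert(
    hf_path="rasyosef/Llama-3.1-Minitron-4B-Chat",  # source Hugging Face repo
    mlx_path="llama4B-mlx",                         # local output directory (name assumed)
)
```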
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the converted weights and tokenizer from the Hugging Face Hub
model, tokenizer = load("psx7/llama4B")

# Generate a completion; verbose=True also prints the output and generation stats
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
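Since the source model is a chat fine-tune, wrapping the prompt with the tokenizer's chat template generally gives better results. The snippet below is a sketch that assumes the converted tokenizer carries a chat template.

```python
from mlx_lm import load, generate

model, tokenizer = load("psx7/llama4B")

# Format the request as a chat turn using the tokenizer's chat template
# (assumed to be present, since the source model is chat-tuned).
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```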