---
language:
- en
license: llama3
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- groq
- tool-use
- function-calling
- mlx
pipeline_tag: text-generation
---
# mlx-community/Llama-3-Groq-70B-Tool-Use-4bit
The model [mlx-community/Llama-3-Groq-70B-Tool-Use-4bit](https://huggingface.co/mlx-community/Llama-3-Groq-70B-Tool-Use-4bit) was converted to the MLX format from [Groq/Llama-3-Groq-70B-Tool-Use](https://huggingface.co/Groq/Llama-3-Groq-70B-Tool-Use) using mlx-lm version **0.15.2**.
## Use with mlx
```bash
pip install mlx-lm
```
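For a quick smoke test without writing any Python, mlx-lm also ships a generation CLI (flags shown as of mlx-lm 0.15.x; adjust if your version differs):

```bash
python -m mlx_lm.generate --model mlx-community/Llama-3-Groq-70B-Tool-Use-4bit --prompt "hello"
```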
```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub (cached locally after the first run)
model, tokenizer = load("mlx-community/Llama-3-Groq-70B-Tool-Use-4bit")

# Generate a completion; verbose=True streams tokens to stdout as they are produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
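Since this is a chat- and tool-use-tuned model, prompts generally work better when run through the model's chat template rather than passed as raw text. A minimal sketch, assuming the converted repository bundles a chat template with its tokenizer:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-3-Groq-70B-Tool-Use-4bit")

# Wrap the user turn in the model's chat template before generating
messages = [{"role": "user", "content": "What is the weather in San Francisco?"}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```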