---
language:
  - en
  - de
  - fr
  - it
  - pt
  - hi
  - es
  - th
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
  - llama-3.1
  - conversational
  - instruction following
  - reasoning
  - function calling
  - mergekit
  - finetuning
  - axolotl
  - mlx
---

# voxmenthe/Llama-3.1-Storm-8B

The model voxmenthe/Llama-3.1-Storm-8B was converted to MLX format from akjindal53244/Llama-3.1-Storm-8B using mlx-lm version 0.17.0.

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the converted weights and tokenizer from the Hugging Face Hub.
model, tokenizer = load("voxmenthe/Llama-3.1-Storm-8B")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```