---
base_model: nvidia/Llama-3.1-Minitron-4B-Width-Base
datasets:
  - teknium/OpenHermes-2.5
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
pipeline_tag: text-generation
tags:
  - mlx
---

# psx7/llama4B

The model **psx7/llama4B** was converted to MLX format from [rasyosef/Llama-3.1-Minitron-4B-Chat](https://huggingface.co/rasyosef/Llama-3.1-Minitron-4B-Chat) using mlx-lm version 0.18.1.

## Use with mlx

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("psx7/llama4B")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
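Since the source model is a chat-tuned checkpoint, prompts generally produce better results when wrapped in the model's chat template before generation. A minimal sketch, assuming the tokenizer shipped with this conversion includes a chat template and supports the standard `apply_chat_template` method (downloading the weights is required to run it):

```python
from mlx_lm import load, generate

# Downloads and loads the converted MLX weights from the Hugging Face Hub.
model, tokenizer = load("psx7/llama4B")

# Wrap the user message in the model's chat template so the prompt
# matches the format the chat model was fine-tuned on.
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```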