---
base_model: NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS
license: cc-by-nc-4.0
tags:
  - not-for-all-audiences
  - nsfw
  - mlx
---

# mlx-community/Lumimaid-70B-v0.1-OAS

The model mlx-community/Lumimaid-70B-v0.1-OAS was converted to MLX format from NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS using mlx-lm version 0.19.0.
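
The exact conversion command is not recorded on this card, so the following is only a sketch of how such a conversion is typically produced with the `mlx_lm.convert` utility; the `--mlx-path` output name and the 8-bit `--q-bits` setting are assumptions (the repo's "8-bit precision" tag suggests 8-bit quantization):

```bash
# Sketch only: re-create an MLX conversion of the base model.
# The output path and 8-bit quantization flags are assumptions,
# not the recorded settings for this repo.
python -m mlx_lm.convert \
    --hf-path NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS \
    --mlx-path Lumimaid-70B-v0.1-OAS \
    -q --q-bits 8
```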

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/Lumimaid-70B-v0.1-OAS")

prompt = "hello"

# Wrap the prompt in the model's chat template if one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
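
Generation can also be run from the command line with the `mlx_lm.generate` CLI. The invocation below is a sketch; the `--max-tokens` value is arbitrary and sampling options are left at their defaults rather than anything specific to this model.

```bash
# Sketch: one-off generation via the mlx-lm CLI (the --max-tokens value is arbitrary).
python -m mlx_lm.generate \
    --model mlx-community/Lumimaid-70B-v0.1-OAS \
    --prompt "hello" \
    --max-tokens 256
```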