---
language:
  - en
  - fr
  - de
  - es
  - it
  - pt
  - ja
  - ko
  - zh
  - ar
  - el
  - fa
  - pl
  - id
  - cs
  - he
  - hi
  - nl
  - ro
  - ru
  - tr
  - uk
  - vi
license: cc-by-nc-4.0
library_name: transformers
tags:
  - mlx
---

# mlx-community/aya-23-8B-8bit

The model mlx-community/aya-23-8B-8bit was converted to MLX format from [CohereForAI/aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B) using mlx-lm version 0.13.1.
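
If you want to reproduce the conversion yourself, mlx-lm ships a `convert` utility. A minimal sketch, assuming the `convert` function with `quantize`/`q_bits` parameters as exposed by the mlx-lm Python API (exact names may vary between versions):

```python
from mlx_lm import convert

# Download CohereForAI/aya-23-8B, quantize the weights to 8 bits,
# and write the MLX-format model to a local directory.
convert(
    "CohereForAI/aya-23-8B",
    mlx_path="aya-23-8B-8bit",
    quantize=True,
    q_bits=8,
)
```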

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/aya-23-8B-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
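
Aya-23 is an instruction-tuned model, so chat-style prompts generally work better when wrapped in the tokenizer's chat template before calling `generate`. A minimal sketch, assuming the tokenizer returned by `load` exposes the standard `apply_chat_template` method from transformers:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/aya-23-8B-8bit")

# Format the user message with the model's chat template so the prompt
# matches what the instruction-tuned model expects.
messages = [{"role": "user", "content": "Write a one-line greeting in French."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```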