---
pipeline_tag: text-generation
inference: false
tags:
  - facebook
  - meta
  - llama
  - llama-2
  - mlx
---

# Llama 2

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned (chat) model, in npz format suitable for use with Apple's MLX framework.

Weights have been converted from the original bfloat16 type to float16, because numpy does not support bfloat16 out of the box.
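
As a rough sketch of why the cast works, bfloat16 shares float32's sign and exponent layout, so its raw bits can be widened to float32 in plain numpy and then narrowed to float16. The function below is illustrative only and is not part of this repository:

```python
import numpy as np

def bfloat16_bits_to_float16(raw: np.ndarray) -> np.ndarray:
    """Convert raw bfloat16 bit patterns (uint16) to float16.

    bfloat16 is float32 with the low 16 mantissa bits dropped, so
    shifting the bits left by 16 and reinterpreting as float32
    recovers the value exactly; the final cast to float16 can lose
    precision for values outside float16's range.
    """
    as_float32 = (raw.astype(np.uint32) << 16).view(np.float32)
    return as_float32.astype(np.float16)

# Example bit patterns: 0x3F80 is bfloat16 for 1.0, 0x4000 for 2.0
raw = np.array([0x3F80, 0x4000], dtype=np.uint16)
print(bfloat16_bits_to_float16(raw))  # [1. 2.]
```

In practice the conversion for this repo was done from the original PyTorch checkpoint, but the dtype reasoning is the same: numpy has no native bfloat16, so everything must pass through a dtype it does understand.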

Please refer to the original model card for details on Llama 2.