Minor fixes
README.md
CHANGED
@@ -18,7 +18,6 @@ Weights have been converted to `float16` from the original `bfloat16` type, beca
 How to use with [MLX](https://github.com/ml-explore/mlx).
 
 ```bash
-
 # Install mlx, mlx-examples, huggingface-cli
 pip install mlx
 pip install huggingface_hub hf_transfer
@@ -32,5 +31,5 @@ huggingface-cli download --local-dir Llama-2-7b-chat-mlx mlx-llama/Llama-2-7b-ch
 python mlx-examples/llama/llama.py Llama-2-7b-chat-mlx/Llama-2-7b-chat.npz Llama-2-7b-chat-mlx/tokenizer.model "My name is "
 ```
 
-Please, refer to the [original model card](https://huggingface.co/meta-llama/Llama-2-7b-chat
+Please, refer to the [original model card](https://huggingface.co/meta-llama/Llama-2-7b-chat) for details on Llama 2.
 