AlejandroOlmedo committed (verified)
Commit e29ab5d · 1 Parent(s): 760a5d2

Update README.md

Files changed (1):
  1. README.md (+3 −3)
README.md CHANGED

@@ -35,9 +35,9 @@ I simply converted it to MLX format (using mlx-lm version **0.20.5**.) with a qu
 | [MLX] (https://huggingface.co/Alejandroolmedo/OpenThinker-7B-8bit-mlx) | 8-bit | 8.10 GB | **Best Quality** |
 | [MLX] (https://huggingface.co/Alejandroolmedo/OpenThinker-7B-4bit-mlx) | 4-bit | 4.30 GB | Good Quality|
 
-# Alejandroolmedo/OpenThinker-7B-Q8-mlx
+# Alejandroolmedo/OpenThinker-7B-8bit-mlx
 
-The Model [Alejandroolmedo/OpenThinker-7B-Q8-mlx](https://huggingface.co/Alejandroolmedo/OpenThinker-7B-Q8-mlx) was converted to MLX format from [open-thoughts/OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B) using mlx-lm version **0.20.5**.
+The Model [Alejandroolmedo/OpenThinker-7B-8bit-mlx](https://huggingface.co/Alejandroolmedo/OpenThinker-7B-8bit-mlx) was converted to MLX format from [open-thoughts/OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B) using mlx-lm version **0.20.5**.
 
 ## Use with mlx
 
@@ -48,7 +48,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("Alejandroolmedo/OpenThinker-7B-Q8-mlx")
+model, tokenizer = load("Alejandroolmedo/OpenThinker-7B-8bit-mlx")
 
 prompt="hello"
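The README snippet shown in the diff is truncated after `prompt="hello"`. For context, below is a minimal sketch of how the renamed 8-bit model is typically loaded and prompted with mlx-lm 0.20.x; the chat-template handling and the `generate` call follow the standard mlx-lm usage pattern and are not part of this commit.

```python
from mlx_lm import load, generate

# Load the 8-bit MLX conversion from the Hugging Face Hub
# (weights are downloaded on first use).
model, tokenizer = load("Alejandroolmedo/OpenThinker-7B-8bit-mlx")

prompt = "hello"

# Assumption: wrap the prompt with the tokenizer's chat template when one
# is defined, as in the standard mlx-lm example code.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# Generate a completion; verbose=True streams tokens to stdout.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
print(response)
```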