AlejandroOlmedo committed on
Commit
2e29eb8
·
verified ·
1 Parent(s): 632dd70

Update README.md

Files changed (1)
  1. README.md +17 -3
README.md CHANGED
@@ -15,9 +15,23 @@ model-index:
 results: []
 ---
 
-# Alejandroolmedo/OpenThinker-32B-Q8-mlx
-
-The model [Alejandroolmedo/OpenThinker-32B-Q8-mlx](https://huggingface.co/Alejandroolmedo/OpenThinker-32B-Q8-mlx) was converted to MLX format from [open-thoughts/OpenThinker-32B](https://huggingface.co/open-thoughts/OpenThinker-32B) using mlx-lm version **0.20.5**.
+# About
+
+**A fully open-source family of reasoning models built using a dataset derived by distilling DeepSeek-R1.**
+
+**This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset.**
+
+*Special thanks to the folks at Open Thoughts for fine-tuning this version of Qwen/Qwen2.5-32B-Instruct. More information about it can be found here:*
+
+[https://huggingface.co/open-thoughts/OpenThinker-32B](https://huggingface.co/open-thoughts/OpenThinker-32B) (base model)
+
+[https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts) (Open Thoughts Git repo)
+
+I simply converted it to MLX format with 8-bit quantization for better performance on Apple Silicon Macs (M1, M2, M3, and M4 chips).
+
+# Alejandroolmedo/OpenThinker-32B-8bit-mlx
+
+The model [Alejandroolmedo/OpenThinker-32B-8bit-mlx](https://huggingface.co/Alejandroolmedo/OpenThinker-32B-8bit-mlx) was converted to MLX format from [open-thoughts/OpenThinker-32B](https://huggingface.co/open-thoughts/OpenThinker-32B) using mlx-lm version **0.20.5**.
 
 ## Use with mlx
 
@@ -28,7 +42,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("Alejandroolmedo/OpenThinker-32B-Q8-mlx")
+model, tokenizer = load("Alejandroolmedo/OpenThinker-32B-8bit-mlx")
 
 prompt="hello"
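
The 8-bit conversion described in the README can be sketched with mlx-lm's converter CLI. This is a minimal, illustrative example, assuming mlx-lm is installed on an Apple Silicon Mac (the commit states version 0.20.5 was used); the local output path `./OpenThinker-32B-8bit-mlx` is a placeholder, not from the commit:

```shell
# Quantize open-thoughts/OpenThinker-32B to 8 bits and write MLX weights locally.
# -q enables quantization; --q-bits sets the bit width.
python -m mlx_lm.convert \
    --hf-path open-thoughts/OpenThinker-32B \
    --mlx-path ./OpenThinker-32B-8bit-mlx \
    -q --q-bits 8
```

The resulting directory can then be loaded with `mlx_lm.load("./OpenThinker-32B-8bit-mlx")` just like the Hub repo shown in the README snippet.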