AlejandroOlmedo committed (verified)
Commit 760a5d2 · Parent(s): f412117

Update README.md

Files changed (1): README.md +20 -0
README.md CHANGED
@@ -15,6 +15,26 @@ model-index:
  results: []
  ---
 
+ # About
+
+ A fully open-source family of reasoning models built on a dataset derived by distilling DeepSeek-R1.
+
+ This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset. It improves upon [Bespoke-Stratos-7B](https://huggingface.co/bespokelabs/Bespoke-Stratos-7B), which was trained on 17k examples ([Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k)).
+
+ *Special thanks to the folks at Open Thoughts for fine-tuning this version of Qwen/Qwen2.5-7B-Instruct. More information can be found here:*
+
+ [https://huggingface.co/open-thoughts/OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B) (base model)
+
+ [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts) (Open Thoughts Git repo)
+
+ I simply converted it to MLX format (using mlx-lm version **0.20.5**) with 8-bit quantization for better performance on Apple Silicon Macs (M1, M2, M3, and M4 chips).
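A conversion like the one described above can be reproduced with the `mlx_lm.convert` CLI. This is a sketch, not the author's exact invocation: it assumes mlx-lm (~0.20.x) is installed on an Apple Silicon Mac, and the output directory name is illustrative.

```shell
# Sketch of an 8-bit MLX conversion with mlx-lm; requires Apple Silicon
# and downloads the full source model from the Hugging Face Hub.
pip install mlx-lm

python -m mlx_lm.convert \
    --hf-path open-thoughts/OpenThinker-7B \
    --mlx-path OpenThinker-7B-Q8-mlx \
    -q --q-bits 8
```

The `-q` flag enables quantization and `--q-bits 8` selects the 8-bit variant described in this card.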
+
+ ## Other Types:
+ | Link | Type | Size | Notes |
+ |------|------|------|-------|
+ | [MLX](https://huggingface.co/Alejandroolmedo/OpenThinker-7B-8bit-mlx) | 8-bit | 8.10 GB | **Best Quality** |
+ | [MLX](https://huggingface.co/Alejandroolmedo/OpenThinker-7B-4bit-mlx) | 4-bit | 4.30 GB | Good Quality |
+
  # Alejandroolmedo/OpenThinker-7B-Q8-mlx
 
  The Model [Alejandroolmedo/OpenThinker-7B-Q8-mlx](https://huggingface.co/Alejandroolmedo/OpenThinker-7B-Q8-mlx) was converted to MLX format from [open-thoughts/OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B) using mlx-lm version **0.20.5**.
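The converted model can be run with the mlx-lm Python API. A minimal usage sketch, assuming mlx-lm is installed on an Apple Silicon Mac (the prompt text is only an example):

```python
# Load the 8-bit MLX model and generate a response; downloads the
# model from the Hugging Face Hub on first use.
from mlx_lm import load, generate

model, tokenizer = load("Alejandroolmedo/OpenThinker-7B-Q8-mlx")

prompt = "How many prime numbers are there below 20?"
# Apply the model's chat template if the tokenizer provides one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```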