AlejandroOlmedo committed
Commit acf3de4 · verified
1 Parent(s): 7bd2ae0

Update README.md

Files changed (1)
  1. README.md +17 -3
README.md CHANGED
@@ -7,9 +7,23 @@ tags:
  base_model: zed-industries/zeta
  ---

- # Alejandroolmedo/zeta-Q8-mlx
-
- The Model [Alejandroolmedo/zeta-Q8-mlx](https://huggingface.co/Alejandroolmedo/zeta-Q8-mlx) was converted to MLX format from [zed-industries/zeta](https://huggingface.co/zed-industries/zeta) using mlx-lm version **0.20.5**.
+ # **About:**
+
+ **Tuned from Qwen2.5-Coder for coding tasks**
+
+ - It's a fine-tuned version of Qwen2.5-Coder-7B built to support [**edit prediction**](https://zed.dev/edit-prediction) in Zed, fine-tuned on the [zeta dataset](https://huggingface.co/datasets/zed-industries/zeta).
+
+ *Special thanks to the folks at Zed Industries for fine-tuning this version of Qwen2.5-Coder-7B.* More information about the model can be found here:
+
+ [https://huggingface.co/zed-industries/zeta](https://huggingface.co/zed-industries/zeta) (Base Model)
+
+ [https://huggingface.co/lmstudio-community/zeta-GGUF](https://huggingface.co/lmstudio-community/zeta-GGUF) (GGUF Version)
+
+ I simply converted it to MLX format (using mlx-lm version **0.20.5**) with 8-bit quantization for better performance on Apple Silicon Macs (M1, M2, M3, and M4 chips).
+
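+ For reference, a conversion along these lines can be reproduced with mlx-lm's Python API (or the equivalent `mlx_lm.convert` CLI). This is a minimal sketch assuming mlx-lm 0.20.x and a hypothetical output directory; the exact invocation used is not recorded in this commit:
+
+ ```python
+ from mlx_lm import convert
+
+ # Quantize the base model to 8 bits and write the MLX weights locally.
+ # (mlx-lm quantizes to 4 bits by default; q_bits=8 selects the Q8 variant.)
+ convert(
+     "zed-industries/zeta",
+     mlx_path="zeta-8bit-mlx",  # hypothetical output directory
+     quantize=True,
+     q_bits=8,
+ )
+ ```
+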
+ # Alejandroolmedo/zeta-8bit-mlx
+
+ The model [Alejandroolmedo/zeta-8bit-mlx](https://huggingface.co/Alejandroolmedo/zeta-8bit-mlx) was converted to MLX format from [zed-industries/zeta](https://huggingface.co/zed-industries/zeta) using mlx-lm version **0.20.5**.

  ## Use with mlx

@@ -20,7 +34,7 @@ pip install mlx-lm
  ```python
  from mlx_lm import load, generate

- model, tokenizer = load("Alejandroolmedo/zeta-Q8-mlx")
+ model, tokenizer = load("Alejandroolmedo/zeta-8bit-mlx")

  prompt="hello"
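
The second hunk cuts off inside the usage snippet at `prompt="hello"`. For context, the stock mlx-lm README snippet this model card follows typically continues along the lines below; treat this as a minimal sketch assuming mlx-lm 0.20.x, not the literal remainder of the file:

```python
from mlx_lm import load, generate

# Load the 8-bit MLX conversion from the Hugging Face Hub.
model, tokenizer = load("Alejandroolmedo/zeta-8bit-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is present
# (zeta inherits a chat template from Qwen2.5-Coder).
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# Stream the completion to stdout and return the generated text.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```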