---
license: cc-by-nc-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- phi3
- conversational
- custom_code
---
# Quantized Octo-planner: On-device Language Model for Planner-Action Agents Framework

This repo includes **GGUF** quantized versions of our Octo-planner model at [NexaAIDev/octopus-planning](https://huggingface.co/NexaAIDev/octopus-planning).


# GGUF Quantization

To run the models, first download them to your local machine, either with `git clone` or via the [Hugging Face Hub](https://huggingface.co/docs/huggingface_hub/en/guides/download):
```bash
git clone https://huggingface.co/NexaAIDev/octo-planner-gguf
```

## Run with [llama.cpp](https://github.com/ggerganov/llama.cpp) (Recommended) 

1. **Clone and compile:**

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# Compile the source code:
make
```

2. **Execute the Model:**

Run the following command in the terminal:

```bash
./llama-cli -m ./path/to/octopus-planning-Q4_K_M.gguf -p "<|user|>Find my presentation for tomorrow's meeting, connect to the conference room projector via Bluetooth, increase the screen brightness, take a screenshot of the final summary slide, and email it to all participants<|end|><|assistant|>"
```
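The `<|user|> … <|end|><|assistant|>` tags in the prompt above are the chat template the model expects. If you script calls to the model, a small helper (hypothetical, not part of this repo) keeps the tags consistent:

```python
def build_planner_prompt(user_message: str) -> str:
    """Wrap a request in the phi3-style chat tags used in the examples above."""
    return f"<|user|>{user_message}<|end|><|assistant|>"


if __name__ == "__main__":
    # Produces: <|user|>Increase the screen brightness<|end|><|assistant|>
    print(build_planner_prompt("Increase the screen brightness"))
```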


## Run with [Ollama](https://github.com/ollama/ollama)

Since our models have not been uploaded to the Ollama server, please download the models and manually import them into Ollama by following these steps:

1. Install Ollama on your local machine. You can also follow the import guide in the [Ollama GitHub repository](https://github.com/ollama/ollama/blob/main/docs/import.md):

```bash
git clone https://github.com/ollama/ollama.git ollama
```

2. Locate the local Ollama directory:
```bash
cd ollama
```

3. Create a `Modelfile` in your directory:
```bash
touch Modelfile
``` 

4. In the `Modelfile`, include a `FROM` statement pointing to your local model file:

```bash
FROM ./path/to/octopus-planning-Q4_K_M.gguf
```
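Beyond the `FROM` line, a `Modelfile` can also pin the chat template and stop token so Ollama wraps prompts automatically. A sketch, assuming the phi3-style tags used in the prompts above (verify against the model card before relying on it):

```bash
FROM ./path/to/octopus-planning-Q4_K_M.gguf

# Assumed phi3-style template; plain prompts then work without manual tags
TEMPLATE """<|user|>{{ .Prompt }}<|end|><|assistant|>"""
PARAMETER stop "<|end|>"
```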

5. Use the following command to add the model to Ollama:
```bash
ollama create octopus-planning-Q4_K_M -f Modelfile
```

6. Verify that the model has been successfully imported:
```bash
ollama ls
```

7. Run the model:
```bash
ollama run octopus-planning-Q4_K_M "<|user|>Find my presentation for tomorrow's meeting, connect to the conference room projector via Bluetooth, increase the screen brightness, take a screenshot of the final summary slide, and email it to all participants<|end|><|assistant|>"
```


# Quantized GGUF Models Benchmark

| Name                         | Quant method | Bits | Size     | Use Cases                           |
| ---------------------------- | ------------ | ---- | -------- | ----------------------------------- |
| octopus-planning-Q2_K.gguf   | Q2_K         | 2    | 1.42 GB  | fast but high loss, not recommended |
| octopus-planning-Q3_K.gguf   | Q3_K         | 3    | 1.96 GB  | very high loss, strongly not recommended |
| octopus-planning-Q3_K_S.gguf | Q3_K_S       | 3    | 1.68 GB  | very high loss, strongly not recommended |
| octopus-planning-Q3_K_M.gguf | Q3_K_M       | 3    | 1.96 GB  | moderate loss, not recommended      |
| octopus-planning-Q3_K_L.gguf | Q3_K_L       | 3    | 2.09 GB  | not recommended                     |
| octopus-planning-Q4_0.gguf   | Q4_0         | 4    | 2.18 GB  | moderate speed, recommended         |
| octopus-planning-Q4_1.gguf   | Q4_1         | 4    | 2.41 GB  | moderate speed, recommended         |
| octopus-planning-Q4_K.gguf   | Q4_K         | 4    | 2.39 GB  | moderate speed, recommended         |
| octopus-planning-Q4_K_S.gguf | Q4_K_S       | 4    | 2.19 GB  | fast and accurate, highly recommended |
| octopus-planning-Q4_K_M.gguf | Q4_K_M       | 4    | 2.39 GB  | fast, recommended                   |
| octopus-planning-Q5_0.gguf   | Q5_0         | 5    | 2.64 GB  | fast, recommended                   |
| octopus-planning-Q5_1.gguf   | Q5_1         | 5    | 2.87 GB  | very large, prefer Q4               |
| octopus-planning-Q5_K.gguf   | Q5_K         | 5    | 2.82 GB  | large, recommended                  |
| octopus-planning-Q5_K_S.gguf | Q5_K_S       | 5    | 2.64 GB  | large, recommended                  |
| octopus-planning-Q5_K_M.gguf | Q5_K_M       | 5    | 2.82 GB  | large, recommended                  |
| octopus-planning-Q6_K.gguf   | Q6_K         | 6    | 3.14 GB  | very large, not recommended         |
| octopus-planning-Q8_0.gguf   | Q8_0         | 8    | 4.06 GB  | very large, not recommended         |
| octopus-planning-F16.gguf    | F16          | 16   | 7.64 GB  | extremely large, unquantized        |

_Quantized with llama.cpp_