---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- llama-cpp
- gguf-my-repo
---

# smcleod/Qwen2.5-Coder-7B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) for more details on the model.
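
To use the Ollama Modelfile below you need the GGUF file on disk first. One way to fetch it (assuming you have the `huggingface-cli` tool from the `huggingface_hub` package installed) is:

```bash
# Download just the Q8_0 GGUF into the current directory
huggingface-cli download smcleod/Qwen2.5-Coder-7B-Instruct-Q8_0-GGUF \
  qwen2.5-coder-7b-instruct-q8_0.gguf --local-dir .
```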


## Ollama Modelfile (draft/beta!)

```

#  ollama create qwen2.5-coder-7b-instruct:q8_0 -f modelfiles/Modelfile-qwen2.5-coder

FROM ../qwen2.5-coder-7b-instruct-q8_0.gguf

# This is Sam's hacked up template 2024-09-19
TEMPLATE """
{{- $fim_prefix := .FIMPrefix -}}
{{- $fim_suffix := .FIMSuffix -}}
{{- $repo_name := .RepoName -}}
{{- $files := .Files -}}
{{- $has_tools := gt (len .Tools) 0 -}}
{{- if $fim_prefix -}}
<|fim_prefix|>{{ $fim_prefix }}<|fim_suffix|>{{ $fim_suffix }}<|fim_middle|>
{{- else if $repo_name -}}
<|repo_name|>{{ $repo_name }}
{{- range $files }}
<|file_sep|>{{ .Path }}
{{ .Content }}
{{- end }}
{{- else -}}
{{- if or .System $has_tools -}}
<|im_start|>system
{{- if .System }}
{{ .System }}
{{- end }}
{{- if $has_tools }}

# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{{- range .Tools }}
{"type": "function", "function": {{ .Function }}}
{{- end }}
</tools>

For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
{{- end }}
<|im_end|>
{{- end }}
{{- if .Messages }}
{{- range $i, $message := .Messages }}
{{- if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
{{- else if eq .Role "assistant" }}<|im_start|>assistant
{{- if .Content }}{{ .Content }}
{{- else if .ToolCalls }}<tool_call>
{{- range .ToolCalls }}
{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{- end }}
</tool_call>
{{- end }}<|im_end|>
{{- else if eq .Role "tool" }}<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{- end }}
{{- end }}
{{- else if .Prompt -}}
<|im_start|>user
{{ .Prompt }}<|im_end|>
{{- end -}}
<|im_start|>assistant
{{ .Response }}
{{- end -}}
"""

PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|fim_prefix|>"
PARAMETER stop "<|fim_suffix|>"
PARAMETER stop "<|fim_middle|>"
PARAMETER stop "<|repo_name|>"
PARAMETER stop "<|file_sep|>"

## Tuning ##
PARAMETER num_ctx 16384
PARAMETER temperature 0.3
PARAMETER top_p 0.8

#  PARAMETER num_batch 1024
#  PARAMETER num_keep 512
#  PARAMETER presence_penalty 0.2
#  PARAMETER frequency_penalty 0.2
#  PARAMETER repeat_last_n 50
```
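
Once the Modelfile is saved (here assumed to live at `modelfiles/Modelfile-qwen2.5-coder`, with the GGUF one directory above it to match the `FROM` path), creating and chatting with the model looks like this:

```bash
# Build the Ollama model from the Modelfile, then run it
ollama create qwen2.5-coder-7b-instruct:q8_0 -f modelfiles/Modelfile-qwen2.5-coder
ollama run qwen2.5-coder-7b-instruct:q8_0 "Write a Python function that reverses a linked list."
```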

## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo smcleod/Qwen2.5-Coder-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
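
For an interactive chat session instead of a one-shot completion, recent llama.cpp builds also offer a conversation mode (the `-cnv` flag below is an assumption about your build; check `llama-cli --help`):

```bash
llama-cli --hf-repo smcleod/Qwen2.5-Coder-7B-Instruct-Q8_0-GGUF \
  --hf-file qwen2.5-coder-7b-instruct-q8_0.gguf -cnv
```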

### Server:
```bash
llama-server --hf-repo smcleod/Qwen2.5-Coder-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-7b-instruct-q8_0.gguf -c 2048
```
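
Once the server is up (it listens on port 8080 by default), you can query its OpenAI-compatible chat endpoint; a minimal sketch:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a bash one-liner that counts lines in all .py files."}
    ],
    "temperature": 0.3
  }'
```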

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
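
Newer llama.cpp revisions have deprecated the Makefile in favour of CMake; on such a checkout, the rough equivalent is:
```
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```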

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo smcleod/Qwen2.5-Coder-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or 
```
./llama-server --hf-repo smcleod/Qwen2.5-Coder-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-7b-instruct-q8_0.gguf -c 2048
```