Update README.md
The [GGUF quantized versions](https://huggingface.co/models?other=base_model:quantized:ZeroXClem/Qwen2.5-7B-Qandora-CySec) can be used directly in Ollama using the following model card. Simply save it as `Modelfile` in the same directory.

```Modelfile
FROM ./qwen2.5-7b-qandora-cysec-q5_0.gguf # Change to your specific quant

# set the temperature (higher is more creative, lower is more coherent)
PARAMETER temperature 0.7

SYSTEM """You are Qwen, merged by ZeroXClem. As such, you are a high quality assistant that excels in general question-answering tasks, code generation, and specialized cybersecurity domains."""
```
Then create the Ollama model by running:

```bash
ollama create qwen2.5-7B-qandora-cysec -f Modelfile
```

Once completed, you can run your Ollama model with:

```bash
ollama run qwen2.5-7B-qandora-cysec
```
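Beyond the CLI, the model can also be queried programmatically through Ollama's local REST API. A minimal sketch, assuming a local Ollama server on the default port (11434) and the model name created above:

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama server
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "qwen2.5-7B-qandora-cysec") -> dict:
    """Build a request body for Ollama's /api/generate endpoint.

    stream=False asks the server to return one complete JSON response
    instead of a stream of partial tokens.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate("Explain what a CSRF token protects against.")` returns the model's answer as a string, provided the server is running and the model has been created with `ollama create` as shown above.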
## 🛠 Usage