Update README.md
README.md
tags:
- llama-factory
- llama-cpp
- gguf-my-repo
datasets:
- ystemsrx/Bad_Data_Alpaca
---

# Warning

1. This model is only a fine-tuning test; actual results may not be good.

2. The model was fine-tuned on a Chinese dataset, so it may work better when used in Chinese.

# kxdw2580/Qwen2.5-3B-Instruct-Uncensored-Test-Q8_0-GGUF

This model was converted to GGUF format from [`kxdw2580/Qwen2.5-3B-Instruct-Uncensored-Test`](https://huggingface.co/kxdw2580/Qwen2.5-3B-Instruct-Uncensored-Test) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/kxdw2580/Qwen2.5-3B-Instruct-Uncensored-Test) for more details on the model.
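The `--hf-file` argument used in the commands below can be read off from the repo name itself. A minimal sketch, assuming the naming pattern observed in this repo (lowercase the repo name, drop the trailing `-GGUF` suffix, append `.gguf`) generalizes:

```shell
# Derive the GGUF file name from this repo's name.
# Assumption: GGUF-my-repo lowercases the repo name, strips the
# trailing -GGUF suffix, and appends .gguf (as seen in this repo).
repo="kxdw2580/Qwen2.5-3B-Instruct-Uncensored-Test-Q8_0-GGUF"
name="${repo#*/}"      # drop the owner prefix
name="${name%-GGUF}"   # drop the -GGUF suffix
file="$(echo "$name" | tr '[:upper:]' '[:lower:]').gguf"
echo "$file"           # qwen2.5-3b-instruct-uncensored-test-q8_0.gguf
```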
or

```
./llama-server --hf-repo kxdw2580/Qwen2.5-3B-Instruct-Uncensored-Test-Q8_0-GGUF --hf-file qwen2.5-3b-instruct-uncensored-test-q8_0.gguf -c 2048
```
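Once `llama-server` is running, it can be queried over HTTP. A hedged sketch of building a chat request (the `/v1/chat/completions` route and port 8080 are llama.cpp's server defaults; the prompt is illustrative, in Chinese since the model was tuned on Chinese data):

```shell
# Build an OpenAI-style chat request for the server started above.
payload='{"messages":[{"role":"user","content":"你好"}],"max_tokens":64}'
echo "$payload"
# Send it with curl once the server is up:
#   curl -s http://localhost:8080/v1/chat/completions \
#        -H 'Content-Type: application/json' -d "$payload"
```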