feihu.hf committed
Commit a93a50e • 1 parent: a74c2c7
update readme
README.md CHANGED
@@ -78,8 +78,9 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-1.8B-Chat-GPTQ`, `Qwen1.5-1.8B-Chat-AWQ`, and `Qwen1.5-1.8B-Chat-GGUF`.
 
 
-##
-
+## Tips
+
+* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.
 
 
 ## Citation
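For reference, the quantized correspondents named in the context line above are used through the same Transformers chat flow as the base model. Below is a minimal sketch, assuming the AWQ checkpoint is published on the Hub as `Qwen/Qwen1.5-1.8B-Chat-AWQ` and that the `autoawq` package is installed; the GPTQ variant loads the same way under its own repo id.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the AWQ correspondent is published under this repo id.
model_id = "Qwen/Qwen1.5-1.8B-Chat-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the dtype recorded in the checkpoint
    device_map="auto",   # place weights on the available device(s)
)

# Chat-style generation, mirroring the README snippet quoted in the hunk header.
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=256)
# Strip the prompt tokens so only the completion is decoded.
generated_ids = [
    output[len(prompt):] for prompt, output in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```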
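As for the tip added in this commit: the recommended hyper-parameters live in the repo's `generation_config.json`, which Transformers can fetch directly via `GenerationConfig.from_pretrained`. A minimal sketch, assuming the base chat repo id `Qwen/Qwen1.5-1.8B-Chat`:

```python
from transformers import GenerationConfig

# Assumption: the base chat model's repo id; generation_config.json ships with it.
gen_config = GenerationConfig.from_pretrained("Qwen/Qwen1.5-1.8B-Chat")
print(gen_config)  # inspect the recommended top_p, repetition_penalty, etc.

# Pass it explicitly at generation time, e.g.:
# generated_ids = model.generate(**model_inputs, generation_config=gen_config)
```

Passing the config explicitly to `model.generate` keeps locally cached defaults from silently overriding the repo's recommended values.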