apepkuss79 committed
Commit
add6174
1 Parent(s): 5000ece

Update README.md

Files changed (1): README.md (+5 -1)
README.md CHANGED
@@ -25,6 +25,10 @@ quantized_by: Second State Inc.
 
 prompt template: `chatml`
 
+**Reverse prompt**
+
+reverse prompt: `<|im_end|>`
+
 **Context size**
 
 chat_ctx_size: `16384`
@@ -53,4 +57,4 @@ chat_ctx_size: `16384`
 | [Yi-1.5-9B-Chat-16K-Q8_0.gguf](https://huggingface.co/gaianet/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-Q8_0.gguf) | Q8_0 | 8 | 6.44 GB| very large, extremely low quality loss - not recommended |
 | [Yi-1.5-9B-Chat-16K-f16.gguf](https://huggingface.co/gaianet/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-f16.gguf) | f16 | 16 | 17.7 GB| |
 
-*Quantized with llama.cpp b2824*
+*Quantized with llama.cpp b3135*
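
The fields touched by this diff (prompt template, reverse prompt, context size) correspond directly to inference settings. Below is a minimal sketch of how they might be wired up, assuming llama-cpp-python as the runtime; this is only an illustration, not the stack this README targets, and the model path simply reuses the Q8_0 file from the table above.

```python
# Minimal sketch, assuming llama-cpp-python is installed and the Q8_0 GGUF
# has been downloaded locally. Parameter values mirror the README fields.
from llama_cpp import Llama

llm = Llama(
    model_path="Yi-1.5-9B-Chat-16K-Q8_0.gguf",  # any GGUF file from the table
    n_ctx=16384,             # chat_ctx_size: `16384`
    chat_format="chatml",    # prompt template: `chatml`
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce the Yi-1.5 model family in one sentence."}],
    stop=["<|im_end|>"],     # reverse prompt added in this commit
)
print(response["choices"][0]["message"]["content"])
```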