R136a1 committed
Commit 2a7b65a
1 Parent(s): 9261568

Update README.md

Files changed (1):
  1. README.md (+1 -1)
README.md CHANGED
@@ -16,7 +16,7 @@ Other quantized models are available from TheBloke: [GGML](https://huggingface.c
 | - | 7 | 6.1056 | 2048 max context size for T4 GPU |
 | - | 8 | 6.1027 | Just, why? |
 
-I'll upload the 7- and 8-bit quants if someone requests them. (I don't know why the 5-bit quant's perplexity is lower than the higher-bit quants'; needs some testing.)
+I'll upload the 7- and 8-bit quants if someone requests them. (I don't know why the 5-bit quant's perplexity is lower than the higher-bit quants'; I think I did something wrong?)
 
 ## Prompt Format