eachadea committed
Commit
bb35084
1 Parent(s): 6b1bec2

Update README.md

Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -9,6 +9,7 @@ inference: false
  - 4-bit quantized
  - Based on version 1.1
  - Used PR "More accurate Q4_0 and Q4_1 quantizations #896" (should be closer in quality to unquantized)
+ - For q4_2, "Q4_2 ARM #1046" was used. Will update regularly if new changes are made.
  - Choosing between q4_0 and q4_1, the logic of "higher number = better" does not apply. If you are confused, stick with q4_0.
  - If you have performance to spare, it might be worth getting the q4_1. It's ~20% slower and requires 1GB more RAM, but has ~5% lower perplexity, which is good for generation quality. You're not gonna notice it, though.
  - If you have *lots* of performance to spare, [TheBloke's conversion](https://huggingface.co/TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g-GGML) is maybe ~7% better in perplexity but ~50% slower and requires 2GB more RAM.
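
For anyone unsure what to do with one of these files once downloaded, here is a minimal sketch of loading a 4-bit GGML file with the `llama-cpp-python` bindings. The bindings, the model filename, and the prompt template below are assumptions rather than anything shipped in this commit: substitute whichever q4_0 / q4_1 / q4_2 file you actually grabbed, and note that only GGML-era builds of llama.cpp (and correspondingly old versions of the bindings) can read these formats.

```python
from llama_cpp import Llama

# Minimal sketch -- assumes `pip install llama-cpp-python` at a GGML-era
# version. The model path is a placeholder, not a file from this repo;
# point it at the q4_0, q4_1, or q4_2 file you downloaded.
llm = Llama(model_path="./ggml-vicuna-13b-1.1-q4_0.bin")

# The USER:/ASSISTANT: template is an assumption for Vicuna 1.1; check
# the model card if the output looks off.
output = llm(
    "USER: What is the capital of France?\nASSISTANT:",
    max_tokens=64,      # cap the completion length
    stop=["USER:"],     # stop before the model starts a new turn
)
print(output["choices"][0]["text"])
```

The tradeoff from the list above applies unchanged here: q4_0 is the safe default, and q4_1 costs roughly 20% more time and 1GB more RAM for a modest perplexity gain.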