---
license: mit
language:
- en
pipeline_tag: text-generation
---

My own (ZeroWw) quantizations: the output and embedding tensors are kept at f16, while all other tensors are quantized to q5_k or q6_k.
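
As a minimal sketch, this kind of mixed-type quantization can be produced with llama.cpp's `llama-quantize` tool and its per-tensor type overrides; the exact commands and file names below are an assumption, not confirmed by this card:

```bash
# Assumed commands to reproduce the f16.q5 / f16.q6 mix from an f16 GGUF.
# --token-embedding-type / --output-tensor-type pin those tensors at f16;
# the remaining tensors use the q5_k or q6_k base type.
./llama-quantize --token-embedding-type f16 --output-tensor-type f16 \
    model.f16.gguf model.f16.q5.gguf q5_k
./llama-quantize --token-embedding-type f16 --output-tensor-type f16 \
    model.f16.gguf model.f16.q6.gguf q6_k
```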

Result: both the f16.q6 and f16.q5 variants are smaller than the standard q8_0 quantization, yet they perform as well as the pure f16 model.

Updated on: Sat Aug 03, 02:49:54